Over the years I have collected a few mementos of projects I have worked on. Some are product information documents, others are various thank-yous I have received from my managers or peers along the way. Following are a few I got around to scanning in.

Let me apologize for the (dis)organization of this page, as I am adding prose as I find documents to scan. As time goes on the quality should improve.

Web-Based Configuration Management and Build System

Most recently I have been consulting independently. I built a web-based build and software configuration manager for one client. The software product was an embedded Linux test instrument that I cannot discuss in detail due to my intellectual property agreement. The product started as a basic Linux system that then needed to be configured with different device drivers and algorithms depending on the customer's needs.

The salesman would access a series of web forms that allowed him to select the options the customer wanted. Rules prevented the design of an invalid configuration. The customer information and the configuration design were stored in databases.

A build technician would use build screens to find the pending orders. The build system ran under Apache. Pre-compiled components were merged with components that needed to be created on demand. The result was a Compact Disc image directory that could then be loaded onto the target system. Source code as well as customer documents were automatically stored to and retrieved from CVS on Linux as needed.

Test Instrument Database Manager

Another client, OpQua, Incorporated, is developing a hand-held test instrument for reading configuration information stored in the EEPROMs in pluggable optical modules such as SFP and XFP modules. Their instrument has a local display that allows a technician to review the contents of the EEPROM including the device manufacturer, model number, serial number, and operating parameters.

The module reader has a serial interface that allows it to be used in a stockroom application where the data output could be stored in a database. My assignment was to create an application that would be suitable as-is for small users and serve as a proof-of-concept demonstration for larger customers.

The result was a form-based GUI application supported by an SQL database. The GUI was implemented with Perl/Tk. Perl was chosen because a variety of modules are available that can be integrated to cover the various interfaces required for the project, and the result can be ported easily between Microsoft Windows (the expected initial execution platform), Linux, and Unix.

The Perl Win32::SerialPort module provided access to the Microsoft Windows serial port drivers. The Tk Dialog, ROText, Scrollbar, Table, and other modules provided for screen formatting. The Win32::ODBC module provided a portable connection to the database subsystem using the Microsoft Windows ODBC32 service. MySQL was chosen as the primary database since it is available on all anticipated platforms and it supports ODBC. The application also works with the Microsoft Access database through ODBC, but MySQL was preferred due to licensing costs.

Web-Based SNMP Network Element Manager

One of the products that I developed at Jedai Broadband Networks was a mini element-manager system for the Jedai FrontRunner 3200 product. The FrontRunner was an access device for optical networks. It had both 10/100BaseT (and fiber-variant) packet data and T1/E1 TDM interfaces on the service side and Gigabit Ethernet ring interfaces on the network side. There could be up to 32 customer-side interfaces on the 3200.

The manager served two purposes: it supported enhanced MIB browsing, and it performed calculations and generated provisioning information for proprietary features within the Jedai network.

The design used CGI programs written in Bash (and Korn) shell and C for the user interface. These and static HTML files were served under Apache. Information about the network and the elements was stored in Unix-style databases. When used on the PC platform, the Cygwin package provided the required shells. Network maps and other graphics were created on the fly with the Dot package.

The SNMP drivers were provided by the Net-SNMP package. The proprietary MIBs for the FrontRunner were built with the Interniche Technologies MIB compiler, which generated C source files used in the FrontRunner 3200 application. I wrote a series of Bash shell and AWK scripts, as well as C programs, to convert the Interniche output into a set of machine-generated source files that could be used by a fast web lookup agent. This was essentially a compiled MIB cross-reference engine. To test the engine I created a very simple command-line MIB browser that used the Net-SNMP libraries for network access. The web version of this tool took commands passed from controlling web pages through either the environment (QUERY_STRING) or stdin (POST data), initiated volleys of SNMP requests through Net-SNMP, cooked the responses, and generated formatted HTML displays of the result.
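
The CGI conventions involved are standard: a GET request delivers its parameters in the QUERY_STRING environment variable, while a POST request delivers them on stdin with the byte count in CONTENT_LENGTH. A minimal C sketch of how such a lookup agent might pick up its command string follows; the code is illustrative only, not the original Jedai source.

    /* Minimal CGI request pickup: GET parameters arrive in the
       QUERY_STRING environment variable; POST data arrives on stdin,
       sized by CONTENT_LENGTH.  Illustrative sketch only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Returns a malloc'd copy of the request parameters, or NULL. */
    static char *read_cgi_request(void)
    {
        const char *method = getenv("REQUEST_METHOD");

        if (method && strcmp(method, "POST") == 0) {
            const char *len_str = getenv("CONTENT_LENGTH");
            long len = len_str ? strtol(len_str, NULL, 10) : 0;
            if (len <= 0)
                return NULL;
            char *buf = malloc(len + 1);
            if (!buf || fread(buf, 1, (size_t)len, stdin) != (size_t)len) {
                free(buf);
                return NULL;
            }
            buf[len] = '\0';
            return buf;
        }

        const char *query = getenv("QUERY_STRING");   /* GET request */
        return query ? strdup(query) : NULL;
    }

    int main(void)
    {
        char *request = read_cgi_request();

        printf("Content-Type: text/html\r\n\r\n");
        printf("<html><body><pre>request: %s</pre></body></html>\n",
               request ? request : "(none)");
        free(request);
        return 0;
    }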

A key feature of the system was to maintain knowledge of the network topology and to make provisioning decisions based upon requests for new service installations. This required analysis of the system interconnection map to determine the best routes for services to take, and to determine how end and intermediate nodes needed to be provisioned.
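
The provisioning analysis itself was proprietary, but the flavor of the route-selection step can be suggested with a minimum-hop search over an adjacency matrix. This is a hypothetical sketch only; the real algorithm also had to weigh how end and intermediate nodes were provisioned.

    /* Hypothetical best-route selection over a small interconnection
       map: breadth-first search for the minimum-hop path from src to
       dst.  Not the Jedai algorithm; illustration only. */
    #define MAX_NODES 32

    /* Fills path[] with node indices src..dst and returns the node
       count, or -1 if dst is unreachable. */
    static int best_route(const int adj[MAX_NODES][MAX_NODES], int n,
                          int src, int dst, int path[MAX_NODES])
    {
        int prev[MAX_NODES], queue[MAX_NODES], head = 0, tail = 0;

        for (int i = 0; i < n; i++)
            prev[i] = -1;
        prev[src] = src;
        queue[tail++] = src;

        while (head < tail) {
            int u = queue[head++];
            if (u == dst)
                break;
            for (int v = 0; v < n; v++)
                if (adj[u][v] && prev[v] < 0) {
                    prev[v] = u;
                    queue[tail++] = v;
                }
        }
        if (prev[dst] < 0)
            return -1;                       /* no route exists */

        int len = 0;                         /* walk predecessors back */
        for (int u = dst; u != src; u = prev[u])
            path[len++] = u;
        path[len++] = src;

        for (int i = 0; i < len / 2; i++) {  /* reverse into src..dst */
            int t = path[i];
            path[i] = path[len - 1 - i];
            path[len - 1 - i] = t;
        }
        return len;
    }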

Terminal Server

To support our lab environment, we needed to be able to communicate with the console ports of the units in the lab from our offices. I created a simple application that made all the serial ports on a Windows computer available over the network. We then put a computer with multiple serial port cards in the lab and ran my server on it. The server listened on a particular socket for remote connection requests from applications such as Hyperterm or Xterm. The application would determine which serial ports were available at the time and display a table on the caller's terminal. The caller could then select a port and speed. The application would then fork a co-process to serve communications between the serving computer and the client.
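
The accept-and-fork structure is easiest to show POSIX-style, although the original server ran on Windows, whose socket and serial APIs differ. In this sketch the port number is arbitrary and the session handler is stubbed; it stands in for the code that listed the free serial ports and relayed the data.

    /* Sketch of the terminal-server accept loop: listen on a TCP
       socket and fork a co-process per caller.  POSIX-style for
       illustration; not the original Windows implementation. */
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    #define LISTEN_PORT 7000              /* illustrative port number */

    /* Stub: the real session code displayed the table of available
       serial ports, took the caller's port and speed selection, and
       then relayed bytes between the socket and the serial port. */
    static void serve_session(int client)
    {
        static const char msg[] = "available ports: COM1 COM2 ...\r\n";
        write(client, msg, sizeof msg - 1);
    }

    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;

        signal(SIGCHLD, SIG_IGN);         /* no zombie co-processes */

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(LISTEN_PORT);

        if (listener < 0 ||
            bind(listener, (struct sockaddr *)&addr, sizeof addr) < 0 ||
            listen(listener, 5) < 0) {
            perror("listen setup");
            return 1;
        }

        for (;;) {
            int client = accept(listener, NULL, NULL);
            if (client < 0)
                continue;
            if (fork() == 0) {            /* co-process per caller */
                close(listener);
                serve_session(client);
                close(client);
                _exit(0);
            }
            close(client);                /* parent keeps listening */
        }
    }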

Boot and Control of SONET Multiplexers

The initial Jedai alpha products used a SONET backbone. Three products were defined: an OC12 to 100BASE-T edge device, an OC12 to OC3 edge device, and an OC48 to OC12 core device. The first was designed and built by a partner company, and they were responsible for providing the operational code. The second was designed by an outside company, but we were responsible for developing the code. The third was designed entirely inside the company.

I took responsibility for delivering the products to the alpha trial per the requirements of our business plan. I studied the hardware for the last two products to determine what code was required for operation in the alpha trial. Since we had yet to hire programming staff, I had to design and plan in a way that could accommodate unknown human resources. I designed a system that booted from flash, could store operational parameters in a simple flash file system, and allowed setting and saving of SONET cross-connect assignments. I guided and trained new employees as they were added to the project.
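
The flash file system was deliberately simple. Its actual layout is not something I can reproduce here, but the general shape of a checksummed parameter record can be sketched; the field names, magic value, and additive checksum below are all illustrative assumptions.

    /* Hypothetical record header for a simple flash parameter store:
       each saved block (operational parameters, cross-connect map,
       and so on) is tagged and checksummed.  Illustrative only. */
    #include <stddef.h>
    #include <stdint.h>

    #define PARAM_MAGIC 0x50415231u       /* "PAR1", assumed marker */

    struct flash_record {
        uint32_t magic;                   /* valid-record marker */
        uint16_t tag;                     /* which parameter block */
        uint16_t length;                  /* payload bytes that follow */
        uint32_t checksum;                /* simple sum over payload */
    };

    /* Additive checksum; a real system might use a CRC instead. */
    static uint32_t record_checksum(const uint8_t *data, size_t len)
    {
        uint32_t sum = 0;
        while (len--)
            sum += *data++;
        return sum;
    }

    /* A record is usable only if marker and checksum both match. */
    static int record_valid(const struct flash_record *hdr,
                            const uint8_t *payload)
    {
        return hdr->magic == PARAM_MAGIC &&
               hdr->checksum == record_checksum(payload, hdr->length);
    }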

These devices used the Motorola PowerPC 860 processor. Initial functionality did not require an operating system, though later in the project, as time permitted, a VxWorks BSP was created and the application was ported to VxWorks.

Configuration Management, Source Code Control, Build Management

Rolling off the alpha assignment, I took the role of building staff skills as the software development department grew. In order to help the team work well together, I implemented and deployed software tools. Not all of the development was done with the VxWorks IDE; several different code editors were used. As a common denominator, the Cygwin tools were deployed to all PCs. This gave us a Linux/Unix-style fallback for tool management.

A Windows 2000 Professional PC was configured as a CVS file server. Developers used either WinCVS or the Cygwin command-line clients for connecting remotely to the server. Since the CVS repository files were not exposed to the network, they were protected from accidental damage by browsers.

There were several projects running in parallel. The major projects used quite a bit of purchased code (several million lines). The CVS repositories had to be carefully designed to support different maintenance rules for purchased versus locally designed code as well as to support modular portability from project to project.

To make it easier for developers of varying skill levels to keep track of changes to code bases and versions, an Apache server was added to the CVS server PC. Web pages were added that allowed developers to easily access change logs and to see who was working on what. I designed an automatic build system that was triggered whenever new code was checked in. It also automatically tagged and built code nightly so we could track our progress. Only properly tagged and built versions were passed to the system test organization.

To keep track of bugs and fixes, a Linux server was added that ran Bugzilla under Apache. Developers and system testers could enter bug reports from the web interface and team leaders and managers could dispatch assignments as well. I developed a bridge between the Bugzilla bug database and the CVS change database that annotated bug reports with lists of files and commit comments associated with the bug fix. It was also possible to prevent code commits if the developer did not have an appropriately assigned bug.

I received a Peer-to-Peer award from Jedai. I value Peer-to-Peer awards greatly since in the final analysis, if you can't get along with and support your peers, you cannot succeed, and your project and product cannot succeed.

SoftModem Re-Architecture for Software Manufacturability

Prior to working at Jedai I was a consultant through Tropaion. My last project through Tropaion was for the Elemedia organization of Lucent Technologies. The project was to produce a working V.90 modem with the source code primarily written in the C language instead of DSP assembly language. The intention was to allow the product to be easily ported between processors and operating systems.

As I entered the project, the basic architecture of the code existed. It was based on the C algorithm simulations created during the development of DSP-based modems. Unfortunately, development in that way meant that several different coding and debug methods were in use. The code was tested with a loopback simulation on a Windows PC. Two simulations were supported: one had a C++ GUI which displayed constellation diagrams as the modem ran; the other ran from the command line and just produced data for analysis. A port to VxWorks on PC hardware had just started, and the controller portion of the modem was being ported to the data pump.

In order to be profitable, the modem had to be very easy for our customers to implement and install. They should not get the modem as source code, since that would require a lot of work on their part to understand, build, and test the code, and would cost us a lot to support their efforts. In addition, it would be difficult to let a customer simply try it out, since we would be exposing all of our source code to them.

To simplify delivery I decided to change this to a binary product, much in the way pSOS was delivered. The binary was simply loaded into memory on the target, and the customer's application would just jump through known locations in the module to perform the interface functions. A simple C example program would serve as the model for customers to do their own ports. The result would be a compiler- and operating-system-independent module. It would cost us very little to support. We would port from processor type to processor type, but we would not have to worry much about the customer's operating system and tool choices.
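
The jump-table idea can be sketched as follows. The entry-point layout, names, and load address here are hypothetical, not the shipped Elemedia interface; the point is that the customer's application needs no link-time symbols, only the module's load address.

    /* Hypothetical entry-point table at the front of the binary
       module; the customer's code calls through it directly. */
    #include <stdint.h>

    struct modem_entry_table {
        uint32_t magic;                   /* sanity check on the image */
        int (*init)(void *workspace);     /* per-instance setup */
        int (*process)(const short *rx, short *tx, int nsamples);
        int (*control)(int request, void *arg);
    };

    #define MODEM_LOAD_ADDR 0x00100000u   /* wherever the image lands */

    const struct modem_entry_table *modem =
        (const struct modem_entry_table *)MODEM_LOAD_ADDR;

    /* The application simply jumps through the table, for example:
       modem->init(workspace);
       modem->process(rx_buf, tx_buf, nsamples);                    */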

Unfortunately, the organization of the code made that a long-term goal; there were too many external references. We set an interim goal of producing a binary library product instead. This was also shipped with example code from which customers could build their applications. The downside was that we would have to build a different library package for each compiler choice for each processor.

Having made that decision, we needed to reorganize the code. The code had a lot of static structures that affected re-entrancy. These had to be identified, and the code around them restructured as well. The interfaces to the application had to be simplified and formalized. Calls to 'standard' libraries had to be eliminated where possible, since not all compiler libraries would provide all the functions that were in use, and the libraries themselves might have re-entrancy problems that would not meet the requirements of our product.

By the time the project wound down, we had restructured the code per my specification and ported it to work with the VxWorks architecture and tools. Libraries were created that supported the Pentium and the Hitachi SH3-DSP and SH4 processors. I had created a build system that allowed simultaneous creation of all of the target libraries and distribution CDs. I designed and documented the reference designs so that customers could port them to proprietary targets in a matter of days.

Of course, the first customer not only did not use VxWorks, but they did not use a compiler tool chain compatible with our libraries for the SH3-DSP. As a result I had to make a two-pronged attack to keep the customer happy. First, I had to port the code and build procedures to use the Hitachi compiler tools and produce equivalent libraries. This was difficult because code that functioned properly with one tool chain did not work with the other, and optimizations that ran fast with one did not necessarily work with the other.

The second prong of the attack was to create the relocatable binary as I had originally intended. This was built with the standard SH3-DSP tool chain used for our VxWorks targets. I took one of the standard demo applications and converted it to change our function-call interface to a jump-table interface. The result was compiled and linked with no external references. An additional C file added to the old sample applications converted the jump-table interface back to our published API. After this was completed we were able to drop the binary into our first customer's target, under their operating system, and have it execute flawlessly.
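
The additional C file amounted to thin wrappers around the jump table. Continuing the hypothetical layout sketched earlier (the published API names below are invented for illustration):

    /* Shim file: present the published function-call API while
       dispatching through the jump table.  Names are illustrative. */
    #include <stdint.h>

    struct modem_entry_table {            /* as sketched earlier */
        uint32_t magic;
        int (*init)(void *workspace);
        int (*process)(const short *rx, short *tx, int nsamples);
        int (*control)(int request, void *arg);
    };

    extern const struct modem_entry_table *modem;  /* set at load time */

    int ModemInit(void *workspace)
    {
        return modem->init(workspace);
    }

    int ModemProcess(const short *rx, short *tx, int nsamples)
    {
        return modem->process(rx, tx, nsamples);
    }

    int ModemControl(int request, void *arg)
    {
        return modem->control(request, arg);
    }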

Real-Time Performance Analysis

A modem requires a lot of computation to operate. It is very difficult for a modem written substantially in C to execute within the limits of the conventional embedded microprocessors available at the time. There are two types of problems you could encounter. First, the program could simply be too fat and will never work with a CPU at the given speed. For instance, the code may require a constant 200 MIPS while the processor only provides 100 MIPS.

The second problem is more subtle and is related to the sample-driven nature of the application. The hardware does analog-to-digital conversions at a predetermined rate, producing a stream of samples at the modem input. The modem analyzes the input sample stream and calculates new digital values to be sent out the phone line. In a simple case the hardware would interrupt the application once each time a sample was available. The modem would run in the interrupt service routine: it would crunch the input sample, write the output sample to the hardware, and then return the CPU to the application. If the modem took too much time processing a sample, it would not produce the output sample before it was needed. This would distort the transmitted signal and probably terminate the data call.
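
That per-sample flow maps onto an interrupt service routine like the one below. The codec and modem function names are hypothetical, since the hardware access is target specific.

    /* Sketch of the per-sample interrupt service routine.  The three
       hooks are assumed target-specific functions, not real APIs. */
    #include <stdint.h>

    extern int16_t codec_read_sample(void);           /* latest A/D result */
    extern void    codec_write_sample(int16_t s);     /* next D/A output */
    extern int16_t modem_process_sample(int16_t in);  /* run state machine */

    /* Called once per sample period by the codec interrupt. */
    void sample_isr(void)
    {
        int16_t rx = codec_read_sample();       /* take the input sample */
        int16_t tx = modem_process_sample(rx);  /* crunch it */
        codec_write_sample(tx);                 /* must land before the
                                                   next sample is due */
    }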

The modem is a state machine, and different states require different numbers of MIPS. It is quite possible (as we discovered the hard way) for the modem to require an average of, say, 50 MIPS as measured in a loopback simulation, yet still crash on the real target. Most of the time the modem had plenty of time to complete its tasks during a sample period, but occasionally it didn't. This type of problem was very hard to find analytically. Initially, people went around 'speeding things up', but in many cases it was wasted effort.

To get a better handle on the problem, I ported the loopback simulation (which ran under Windows) to one of our targets and made the whole thing run as a single task. Next, I took advantage of the way the GNU compiler (which comes with VxWorks) operates: it generates assembly language from the C code and passes it to the assembler. I modified the makefiles so I could selectively stop compilation prior to assembly. I then wrote a collection of shell and C programs to process the assembly language. These tools could find the starts and ends of C functions and add a few lines of assembly language that wrote a tag number and a processor cycle count to a software trace buffer. The resulting assembly language was assembled and linked into the loopback simulation.
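
Shown as C for readability, the injected assembly effectively did the following at every instrumented function entry and exit. The buffer sizing and the cycle-counter hook are assumptions; the real instrumentation was a few machine-generated lines of assembly.

    /* C-level equivalent of the injected instrumentation: append a
       (tag, cycle count) pair to a software trace buffer. */
    #include <stdint.h>

    #define TRACE_DEPTH 65536             /* power of two for cheap wrap */

    struct trace_entry {
        uint32_t tag;                     /* function id + entry/exit bit */
        uint32_t cycles;                  /* free-running cycle counter */
    };

    static struct trace_entry trace_buf[TRACE_DEPTH];
    static volatile uint32_t trace_head;

    extern uint32_t read_cycle_counter(void);   /* target specific */

    void trace_event(uint32_t tag)
    {
        uint32_t i = trace_head++ & (TRACE_DEPTH - 1);
        trace_buf[i].tag = tag;
        trace_buf[i].cycles = read_cycle_counter();
    }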

As the modem ran, each time an instrumented function was called or returned, a queue entry was created. The modem was allowed to start up and get into the normal operating mode. The simulation was then stopped, and the trace buffer was transferred to a PC. Post-processing tools generated reports of how many CPU cycles were required to process each sample. When we plotted the results with Excel, we found very deterministic instances where the modem required far more CPU cycles to complete a task than the processor could provide; in some cases these CPU spikes were in the thousands of MIPS. By inspecting the function-by-function CPU utilization statistics for such a block, we could easily identify where the MIPS were going.

As a result, problems that had gone unsolved for weeks were identified and surgically repaired, in most cases within days.

SoftModem "Agent" Development

As I mentioned before, we provided sample applications on several different boards with the different supported processors. We referred to these as Agents. For each of them I wrote application notes giving the theory of operation for the particular example. These went along with our API document to aid customers in their designs.

Most of the agents were pretty obvious: the modem used a standard telephone interface on one side and a serial interface on the other. A typical real application would have the application on the same board as the modem, so the serial interface was a distraction. Most of the boards had Ethernet interfaces, so we built applications where the serial interface was replaced with a socket interface. We could then connect to the modem with HyperTerm or a data load generator and send data faster than would be possible over the serial interface.

CDMA Cell Phone

Prior to my SoftModem contract, I was a lead software developer on a CDMA Cell Phone project at a division of Lucent that ended up partnering with Philips. I was responsible for: