Application Development - Our Approach
For example, an LCD screen on the target may be emulated by a graphic of an LCD screen on the host machine. The code for drawing on the LCD can be the same on both the host and target, but the lowest level of the code is different. In the case of the target, it sets up some memory as an LCD frame buffer and draws directly into it. In the case of the host, it builds the image in an allocated area of memory and calls a routine to draw it in a window on the host.
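The split described above can be sketched as a small abstraction layer. The names here (`FrameBuffer`, `HostFrameBuffer`, `drawHorizontalLine`) are illustrative, not from any particular project: the drawing code depends only on an interface, and the host and target each supply their own implementation behind it.

```cpp
#include <cstdint>
#include <vector>

// Illustrative drawing surface: the high-level drawing code sees only
// this interface, so it is identical on host and target builds.
class FrameBuffer {
public:
    virtual ~FrameBuffer() = default;
    virtual void setPixel(int x, int y, uint16_t colour) = 0;
    virtual void flush() = 0;   // no-op on target, window blit on host
};

// Host-side emulation: the image is built in an allocated area of
// memory; flush() would hand it to the windowing system for display.
class HostFrameBuffer : public FrameBuffer {
public:
    HostFrameBuffer(int w, int h) : width_(w), pixels_(w * h, 0) {}
    void setPixel(int x, int y, uint16_t colour) override {
        pixels_[y * width_ + x] = colour;
    }
    void flush() override { /* e.g. copy pixels_ into a host window */ }
    uint16_t pixel(int x, int y) const { return pixels_[y * width_ + x]; }
private:
    int width_;
    std::vector<uint16_t> pixels_;
};

// Shared drawing code, compiled unchanged for host and target.
void drawHorizontalLine(FrameBuffer& fb, int y, int x0, int x1,
                        uint16_t colour) {
    for (int x = x0; x <= x1; ++x)
        fb.setPixel(x, y, colour);
}
```

On the target, the corresponding implementation would write `setPixel` results directly into the memory-mapped LCD frame buffer, with `flush` doing nothing.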
Such techniques can also increase parallelism. Often the real target hardware is scarce, or even non-existent at the start of the project. With emulation techniques, several programmers can nonetheless be working on various aspects of the software, knowing that their code will work correctly on the final target. There is of course no getting away from the final port to the target. But this may be managed by a separate team, responsible for the emulation environment.
For actual target development, the right tools are essential. These include a GUI development environment and debugger, an ICE unit, sometimes a trace unit, as well as the usual hardware tools such as oscilloscopes, signal generators and logic analysers. A particularly difficult problem may require all of these tools hooked up to a single board, if only for a few hours.
In the user interface, large menus, obscure commands and hidden function keys must be avoided, without making the application slow or cumbersome for the experienced user. There is a conflict between accessible functionality (shortcuts and options) and good first impressions (a simple, clear interface), and a balance must be struck between the two.
For any non-trivial application, we generally use a GUI toolkit, such as Trolltech's Qt GUI Toolkit. This allows us to concentrate on the application rather than the widgets, and often saves significant development time and cost. An advantage of Qt is that Qt applications can run on many operating systems, including Embedded Linux and Linux, but also Windows and Mac OS. As portable devices can now have very similar functionality to PCs, this can be a significant bonus for some projects.
Embedded applications often have to interface to hardware, possibly through OS drivers. They may need to perform signal processing or amalgamation on incoming data, or generate output based on other input. In many cases, the data are time-critical, meaning that the data processing must be handled in a high-priority thread or separate high-priority application. At the same time, the unit must be responsive to user input.
With the right architecture these problems can be overcome. In one case we designed software which coped with a 50 kHz interrupt rate, without any special treatment in the Operating System. With the ARM architecture, much higher interrupt rates are possible using the Fast Interrupt (FIQ) facility.
Testing is probably the most important part of embedded application development. Unlike desktop software, embedded software generally needs to be 100% reliable, since there may not be a friendly user available 24 hours a day to reboot, rebuild or fix any problems.
Testing a non-trivial application must not degenerate into manual effort. This might seem the best approach for the first release, but it rapidly becomes tedious and time-consuming as changes and enhancements are made along the way. In general, we make use of automated methods where possible.
Testing may make use of the following techniques:
- Unit testing, where the individual units (or modules) within an application each have a test program. This ensures that the functions within each unit are correct. Several units may be tested together where there are dependencies. Unit testing speeds up integration significantly, since known-working units are integrated, rather than untested masses of code.
- Simulation testing, where simulated stimulus is provided by a testing module, which then checks that the output is correct. This can be used for testing major algorithms within the software. The stimulus may come from other software, or from manually-checked output of a previous revision of the code. Simulation testing is particularly useful when the input data is hard to recreate consistently each time, such as the input from an audio codec.
- Regression testing, where previous manual test failures or reported bugs are turned into tests and added to the test suite. This stops already-fixed problems from returning, and may show up other problems as development progresses.
- Release testing, where a release is made and submitted to a test team, or the client, for manual testing. Time is spent immediately fixing reported problems where possible, to produce a relatively stable release. This can avoid the problem with a long development, where none of the code actually works and an enormous amount of debugging suddenly becomes necessary at the end.
- Integration testing, where a test controller talks to the various elements of the system, sending data to one, and checking the results in another. This is a high-level test, which is best built into the software from the start. Integration testing is potentially very powerful since it makes few assumptions on how the data are processed, and checks only that the correct results are obtained. Integration testing can also be useful for sending in bogus data and checking that the application copes correctly with this.
- Manual testing, finally, for which ultimately there is no substitute. This can catch unforeseen problems with unexpected user input or hardware input.
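As a concrete illustration of the first two techniques, a unit test program is often just a small executable of assertions against one module, with regression cases appended as bugs are fixed. The `checksum` routine and its "reported bug" below are hypothetical examples, not from any real project:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical unit under test: a two's-complement checksum routine
// from a protocol module.
uint8_t checksum(const uint8_t* data, int len) {
    uint8_t sum = 0;
    for (int i = 0; i < len; ++i)
        sum = static_cast<uint8_t>(sum + data[i]);
    return static_cast<uint8_t>(~sum + 1);
}

// Unit test program: exercises the module in isolation, so that a
// known-working unit goes into integration.
void test_checksum() {
    const uint8_t msg[] = {0x01, 0x02, 0x03};
    uint8_t c = checksum(msg, 3);
    // The message plus its checksum must sum to zero (mod 256).
    assert(static_cast<uint8_t>(0x01 + 0x02 + 0x03 + c) == 0);
    // Regression case: a (hypothetical) previously-reported bug with
    // an empty message, kept in the suite so the fix cannot regress.
    assert(checksum(nullptr, 0) == 0);
}
```

The same harness style extends to simulation testing: recorded or synthesised input data is fed through the module, and the output compared against manually-checked results from a previous revision.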
Often the biggest obstacle to automated testing is the GUI. Although automated testing of GUIs is possible, it is only worth the effort for fairly large GUIs, which are not very common on embedded systems. By isolating the GUI from the rest of the application, we can ensure that the application is mostly tested with automatic systems. The manually-tested GUI can then be placed on top with reasonable confidence that everything is correct.
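This isolation usually means keeping the application state and logic in plain classes with no GUI dependencies, which automated tests can drive directly; the GUI becomes a thin shell that forwards events and renders state. The `BatteryModel` class and its voltage thresholds below are assumptions for illustration:

```cpp
#include <string>

// Illustrative sketch: the application logic lives in a plain class
// with no GUI dependencies, so automated tests can drive it directly.
class BatteryModel {
public:
    void setMillivolts(int mv) { mv_ = mv; }
    int percent() const {
        if (mv_ <= 3300) return 0;          // assumed "empty" threshold
        if (mv_ >= 4200) return 100;        // assumed "full" threshold
        return (mv_ - 3300) * 100 / (4200 - 3300);
    }
    std::string label() const {
        return std::to_string(percent()) + "%";
    }
private:
    int mv_ = 0;
};

// The manually-tested GUI layer would then merely do something like:
//   statusWidget->setText(model.label());
```

Because everything up to `label()` is covered by automatic tests, manual GUI testing only has to confirm that the right strings appear in the right widgets.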
GUI testing can also refer to usability testing. It is very hard to get a user interface perfect on paper, and tweaks are often needed when the application is feature-complete. For this reason, a GUI mock-up is often created in the specification phase. This can be adjusted and altered throughout the project, and slotted in at the end when everyone is happy.
As the application nears completion, releases are generally made to the client and to internal testers. These releases help to identify problems, and narrow down the work required to finish the project. These releases must be performed largely automatically, so that they are repeatable. If human error can easily ruin a release, then it becomes hard to tell whether the application is at fault or the release. For complex systems, manual releases may be all but impossible.
Typically a simple shell or Perl script is sufficient to automate a release. Various tests can be automatically run on the release before it is sent out.
It is common practice to have a 'build machine' which, starting from a blank directory each night, builds all the source code and runs all the tests. It then sends an email to the project manager in the morning with a list of any problems found. This becomes particularly important as the project draws to a close, where a 'code freeze' or other stabilisation method may be employed.
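A nightly build script of this kind can be sketched as below. Everything here is illustrative: the repository, paths and mail address are assumptions, and a stand-in Makefile replaces the real source checkout so the sketch is self-contained.

```shell
#!/bin/sh
# Illustrative nightly build: start from a blank directory, build all
# the source, run the tests, and mail a report in the morning.

BUILD_DIR=$(mktemp -d)
LOG="$BUILD_DIR/build.log"
cd "$BUILD_DIR"

# A checkout of the project source would go here, e.g.:
#   cvs checkout project    (or a clone from the version-control server)
# For this sketch we create a stand-in Makefile instead:
mkdir src
printf 'all:\n\t@echo built\ntest:\n\t@echo "0 failures"\n' > src/Makefile

if make -C src all > "$LOG" 2>&1 && make -C src test >> "$LOG" 2>&1; then
    RESULT="build OK"
else
    RESULT="PROBLEMS FOUND"
fi

echo "$RESULT"
# The report would normally be mailed to the project manager, e.g.:
#   mail -s "Nightly build: $RESULT" pm@example.com < "$LOG"
```

Because the build starts from a blank directory every night, stale object files and forgotten local changes cannot mask a broken checkout: what the script builds is exactly what a release would ship.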
Successful embedded application development requires engineering discipline and organisation. Apart from the obvious software efficiency requirements with respect to memory and CPU usage, there are reliability constraints which substantially influence the development approach.
With the right development and testing strategy, the problems can be overcome and a functional and reliable application can be created.