FPGA Development - Our Approach

FPGA development is largely about partitioning. Should a function be implemented in hardware or software? The answer to this question depends on many factors, and arriving at the right answer is key to a successful design.

In theory (thanks to Alan Turing for this) it is possible to make an FPGA implement an entire system, including all hardware peripherals (serial ports, LCD controllers, memory) and software functionality. It is in fact possible to build any design simply with a sufficiently large FPGA, a clock or two, and some I/O interfacing.

While the FPGA manufacturers would no doubt be delighted if we took this approach, in practice it rarely produces an efficient design.

At the other extreme we could use standard parts (CPU, UART, LCD controller) and build all of the functionality we want in software. This is a common approach, and provides enormous flexibility. It is also very cost-effective.

The reason to want to use an FPGA at all is generally performance. As an example, imagine an I/O line which must be sampled at the rate of 1 MHz. It would be possible to use a software interrupt to handle this, but this would soak up significant CPU time. An FPGA could sample the I/O and store the results in a FIFO. The CPU could then read the results from this FIFO at the rate of, say, 1 kHz. This means that the FIFO would need to hold around 1000 bits, or about 32 words. This sort of logic is ideal for an FPGA, and CPU load would reduce significantly.
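As a sanity check, the FIFO sizing in this example can be reproduced with a little arithmetic. The rates and the 32-bit word size are taken from the example above; the script is just a sketch of the calculation, not production sizing code.

```python
import math

SAMPLE_RATE_HZ = 1_000_000   # FPGA samples the I/O line at 1 MHz
DRAIN_RATE_HZ  = 1_000       # CPU empties the FIFO at 1 kHz
WORD_BITS      = 32          # assumed CPU word size

# Bits that accumulate in the FIFO between consecutive CPU reads
bits_per_drain = SAMPLE_RATE_HZ // DRAIN_RATE_HZ           # 1000 bits

# Round up to whole CPU words of FIFO storage
words_needed = math.ceil(bits_per_drain / WORD_BITS)       # 32 words

print(f"FIFO must hold {bits_per_drain} bits = {words_needed} words")
```

In practice the FIFO would be sized somewhat larger than this minimum, to tolerate jitter in the CPU's read timing.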

Partitioning such functionality between an FPGA and CPU software is done largely on the basis of need. If the CPU has plenty of spare cycles, then the CPU can do it. If the CPU is getting a bit loaded, then the FPGA needs to do it. The problem is that early on in the design cycle it is very hard to determine what the CPU load is likely to be.

This makes it difficult to decide where to place the required functionality. Once the circuit is designed with, say, I/O lines going to the CPU, it generally requires a board change to reroute them to the FPGA. It is therefore prudent to err on the side of putting more functionality in the FPGA. If the CPU is later found to be idle most of the time, functionality can always be pushed back to the CPU (with a board respin, admittedly), but in the meantime there is a better chance that the system will perform as expected.

FPGA SELECTION
A similar problem arises in FPGA development with device selection. How many FPGA I/Os does a system need? How many logic elements are required in the FPGA to implement the required functionality?

These questions can be answered fairly accurately by a good feasibility investigation. To answer the first question, we could count the number of I/Os required to interface with the processor and any peripheral chips, add the number of external I/Os which must go to the FPGA, and then add a little for safety and late design changes. This generally produces a fairly accurate estimate.
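The I/O estimate described above is simple addition plus headroom. The pin counts below are hypothetical placeholders purely for illustration; real figures come from the schematic and the chosen processor's bus interface.

```python
# Hypothetical pin budget (assumed values, for illustration only)
cpu_bus_pins     = 40   # processor data/address/control interface
peripheral_pins  = 24   # peripheral chips wired through the FPGA
external_io_pins = 16   # external I/Os that must go to the FPGA

raw_total = cpu_bus_pins + peripheral_pins + external_io_pins

# Add a little for safety and late design changes (assumed 15%)
margin_pct = 15
estimate   = raw_total + raw_total * margin_pct // 100

print(f"{raw_total} pins required; select a package with >= {estimate} I/Os")
```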

In answer to the second question, we look at the functions we expect the FPGA to perform, with particular reference to memory-based functions, since these can use up a lot of space in an FPGA. For example, let us imagine we need a 2048-entry 16-bit wide FIFO, 10 UARTs, 5 counter/timers and a 16x16 multiplication engine connected to another 256-entry 16-bit wide FIFO. Most of these are standard parts which we already have or can easily obtain. Therefore we add together the logic element sizes of these blocks to obtain a rough estimate, and add a little more for safety.
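The same estimate can be sketched numerically. The FIFO dimensions are those from the example above; the per-block logic element figures are placeholder assumptions, since real numbers would come from vendor datasheets or a trial synthesis of each block.

```python
# Memory-based blocks dominate FPGA resource use, so count their RAM
# bits separately from general logic elements.
fifo_bits = 2048 * 16 + 256 * 16        # both FIFOs: 36,864 memory bits

# Per-block logic element figures are assumed placeholders
block_les = {
    "fifo_control_x2": 2 * 80,          # control logic for the two FIFOs
    "uart_x10":        10 * 100,        # ten UARTs
    "timer_x5":        5 * 60,          # five counter/timers
    "mult_16x16":      300,             # 16x16 multiplication engine
}
raw_les = sum(block_les.values())

# Add a little more for safety (assumed 20%)
margin_pct = 20
estimate   = raw_les + raw_les * margin_pct // 100

print(f"~{fifo_bits} RAM bits and ~{estimate} logic elements needed")
```

A device must then be chosen with at least this many RAM bits and logic elements available, which is why memory-hungry blocks are worth flagging early.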

Taking this further, we can use advanced FPGA tools to build a prototype design, where the logic elements are connected to a 'dummy' top level module. This should give a more accurate logic element count. If time permits, the design can be prototyped on suitable development hardware to obtain a very accurate count.

Based on this information, the lowest-cost FPGA which meets the requirements can be selected. By choosing an FPGA from a family, we ensure that we can move down to a lower-speed, lower-density or even smaller-pin-count device if we improve on our estimates.

We generally use Verilog or VHDL for FPGA development. We have built up a library of blocks which can be used in designs, and of course have access to tool- and vendor-specific libraries for many different functions.

For prototyping we have a number of Integrator boards with different features and options. For example, the Integrator/CM-XA10 board has an Altera Excalibur chip with several hundred I/Os and a million gates of logic. Combined with the Integrator's PCI slots, great tool support, an ARM922T core and an internal AMBA bus, this is a very powerful prototyping platform. We also have simpler boards based around specific FPGAs which can be connected to various ARM CPUs.

For cost-constrained designs requiring significant peripheral connectivity, Snapper can be a very effective prototyping platform.

Designs can be tested using FPGA simulator software such as ModelSim. This allows stimulus to be entered and the results inspected. Testing can also be performed on the prototyping platform. In this case, additional test lines can be set up which bring out internal signals. Given a suitable test port, a logic analyser can be connected to see the operation of the FPGA and to debug problems and check performance. A suitable software test program with a custom hardware rig can be useful for production test.

Once a design is working, it is reviewed to ensure that it is as small as possible. Sometimes superfluous logic is present in the design which is not obvious until the prototype is complete, or later. Removing unused or unnecessary functions can significantly reduce the logic element count of a design. Parameterised parts of the design can be changed to fixed values. FIFO and other memory element sizes can be reduced. Bus widths can be narrowed and address decoding simplified. All of these changes may allow a smaller or slower FPGA to be used for the product board, thus saving cost.

As designs get more and more complex, and FPGA prices fall, FPGAs are bound to find increased use. The key to a successful FPGA-based design is not just clever coding for the best possible synthesis, it is also partitioning. A great design for a superfluous FPGA block is still worse than no block at all.