Posted in Hardware design on May 20, 2010 by Russell Hocken
When drawing up the schematics for a new design, it is very easy to add new components willy-nilly as and when required. Then - once happy with the design - you move on and produce a masterpiece of a PCB layout, send the files out for quote, and get on with purchasing components. It's at this point that you may find yourself confronted with a large BOM (Bill of Materials) to buy before you can build your board. The time taken to find and buy these components is not the end of it either. A large BOM has the following drawbacks:
- The more discrete part types in a design, the lower the volume of each will be, and the cost benefit of buying in bulk cannot be realised
- The unavailability of any part can stall a build. The more parts in a build, the more likely this is to happen
- More parts means longer machine setup times and more feeder requirements - possibly even a double pass through the machine. These all add to build cost
- More components to track and keep inventory of, which costs time
On most boards there are a variety of very simple things that can be done for very low cost to mitigate these problems - the initial cost will more than pay for itself in reduced future costs and reduced risk.
The BOM rationalisation step should be performed after the schematic phase - NOT after the PCB phase. Often BOM simplification requires trivial changes in the schematics, but once the PCB layout is done changes may result in a lot of work needing to be redone.
There are many things that can be done to reduce the BOM count. Resistors and capacitors are the simplest place to start: there is often a wide range of these in any design, and they are by far the easiest target. There will often be many component values of which only one or two are used. Several things can be done in this case:
- Check whether the odd-ball specific value is actually required, or whether it can be replaced by a more common component in the design. Pull-up resistors are a common culprit here
- If the value is required, see if it can be produced by a parallel or series combination of common components. It is quite surprising how often this technique can be used.
- Voltage dividers can be a special case of the above technique. In one recent design, it was discovered that 100k, 120k and 330k resistors could produce over 60 different divider ratios with one or two high-side resistors and one or two low-side ones. The spread was quite even too.
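As a sanity check on that divider claim, here is a quick sketch (Python, purely for the arithmetic) that enumerates the ratios available from 100k, 120k and 330k. The "one resistor or two in series per side" assumption is mine - allowing parallel combinations would add even more options:

```python
from fractions import Fraction
from itertools import combinations_with_replacement

# On-BOM values (ohms) - the three values mentioned above.
values = [100_000, 120_000, 330_000]

# Each side of the divider: one resistor, or two in series.
sides = {sum(c) for n in (1, 2)
         for c in combinations_with_replacement(values, n)}

# Divider ratio Vout/Vin = Rlow / (Rhigh + Rlow), kept exact with Fraction
# so equal ratios from different resistor pairs collapse together.
ratios = {Fraction(r_low, r_high + r_low)
          for r_high in sides for r_low in sides}

print(len(sides), "side resistances,", len(ratios), "distinct ratios")
```

Running this gives 9 distinct side resistances and 63 distinct divider ratios - consistent with the "over 60" figure above.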
Power supplies can often be simplified too:
- Using an adjustable version of a power supply and tailoring it to several different voltage rails uses fewer different ICs than using the pre-programmed ones. When it comes to production, a shortage of any IC could bring things to a halt, so the fewer different IC types, the better
- Adjustable supplies normally have a voltage divider to set the output - take advantage of the voltage divider options as discussed above. Remember that a 3.3v supply doesn't need to be 3.300000000v: 3.27453v is more than acceptable for the vast majority of systems!
- Inductors can often be reused across switch mode supplies even when the voltage rails differ. This may sacrifice a minor percentage of efficiency, or on the surface cost a few cents more - though doubling the purchase volume will more than likely cancel this out.
- Don't be afraid to understand and modify the 'webbench' recommendation. TI, National, etc all have really easy to use web based tools for designing power supplies, but don't take the design they provide as the end of it. There is scope for adjustment and fine tuning to suit your particular board
- Some switch mode ICs can be configured as a DC/DC converter, or as a current source for driving an LED backlight. Look into these options at design time and use the same switch mode IC for both
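To make the divider-reuse idea concrete, here is a sketch that picks feedback resistors for an adjustable regulator from values already on the BOM. The 0.8v reference and the Vout = Vref * (1 + Rtop/Rbot) relationship are typical of many adjustable switchers, but they are assumptions here - check the datasheet for your part:

```python
from itertools import combinations_with_replacement

V_REF = 0.8    # feedback reference voltage (assumed; see your regulator's datasheet)
TARGET = 3.3   # desired rail

# Resistors already on the BOM; each divider leg is one resistor or two in series.
values = [100_000, 120_000, 330_000]
sides = {sum(c) for n in (1, 2)
         for c in combinations_with_replacement(values, n)}

def vout(r_top, r_bot):
    """Output of a typical adjustable regulator's feedback divider."""
    return V_REF * (1 + r_top / r_bot)

# Choose the pair whose output lands closest to the target rail.
r_top, r_bot = min(((t, b) for t in sides for b in sides),
                   key=lambda p: abs(vout(*p) - TARGET))
print(r_top, r_bot, vout(r_top, r_bot))
```

With these three values the closest hit to 3.3v is 660k over 220k (two series resistors per leg), giving a 3.2v rail - about 3% low, which as noted above is fine for the vast majority of systems.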
Similar components can often crop up in designs. If there are multiple different types of MOSFETs, BJTs or diodes, these can be reviewed to ensure that there is a good reason for them all to be different. This is most often a problem when portions of several designs are combined to make a new one.
Use configurable logic gates: TI, for one, makes small ICs (the SN74AUP1G57, for example) that can be wired up as an AND gate, an OR gate, or some other odd-ball gate. These have advantages if there are several different gate types in the design that can each be replaced by a differently-wired version of the single configurable gate.
For some applications, consider using transistors and resistors to replace a one-off logic IC.
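The trick with a configurable gate is that one stocked part covers several functions depending on how its pins are tied. The actual truth table of the SN74AUP1G57 is in its datasheet; the sketch below uses a generic mux-style configurable cell (an illustration of the idea, not the '1G57's real function) to show how tying inputs to fixed levels turns one part into different gates:

```python
def mux_cell(sel, d1, d0):
    """Generic configurable cell: output follows d1 when sel is 1, else d0."""
    return d1 if sel else d0

def and_gate(a, b):
    return mux_cell(a, b, 0)   # tie d0 low  -> output is a AND b

def or_gate(a, b):
    return mux_cell(a, 1, b)   # tie d1 high -> output is a OR b

# Print the truth tables to confirm both functions come from the one cell.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", and_gate(a, b), "OR:", or_gate(a, b))
```

One footprint, one reel on the pick-and-place machine, several logic functions - exactly the BOM reduction being discussed.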
There are many more things that can be done. The majority take very little effort at the schematic phase. It is easy to be of the mindset that that 12.7k resistor only costs a few cents, so why bother removing it? However, the issue is not the cost of the component - it is the cost of supporting the component! The 5 minutes to design it out pales when compared to what it would cost to support it through the lifetime of the product through purchasing, stocking, machine setup, and the risk of it being temporarily unavailable.
Here at Bluewater we strive to keep BOM counts down, and this approach has paid dividends in both simpler prototype and production builds. The benefit is easier stock tracking and reduced time spent on component management.
Posted in Hardware design on February 20, 2009 by Russell Hocken
FPGAs are great devices for performance and functionality. They are primarily made up of flip flops, LUTs (Look Up Tables) and an interconnect matrix. In addition there are often block RAMs, DSP blocks, fast adders, etc. LUTs themselves, however, offer some often unrealised design features. In most Xilinx FPGAs, the LUT is a 16x1 configuration, i.e. there are four lines in and one out. Internally there are 16 elements, each representing the output state for a specific input combination. Many people leave it at that and let the synthesis tools decide how best to use them. However, upon closer inspection, the LUT can operate as a shift register through all 16 bits, with a carry output and a programmable tap.
-- see http://toolbox.xilinx.com/docsan/xilinx7/books/data/docs/lib/lib0370_356.html
component SRL16E
  generic (INIT : bit_vector := X"0000");
  port (D   : in  std_logic;
        CE  : in  std_logic;
        CLK : in  std_logic;
        A0  : in  std_logic;
        A1  : in  std_logic;
        A2  : in  std_logic;
        A3  : in  std_logic;
        Q   : out std_logic);
end component;

Encapsulated in modules, I have made LUTs into the following:
- Small, efficient, fast counters: A single LUT can count (cycle) through 16 states. If the overall output of this is used to enable a second LUT counter, then the pair can count through 256 states, three combined can do 4096, etc. This uses fewer resources than implementing the counter in flip flops, and also has the advantage that the fan-in/out is lower, so it can operate faster. To implement an arbitrary counter, a recursive module is used which takes k - the delay length - and breaks it into k1*16+r1. k1 is a shorter counter that cycles 16 times and, when that is done, triggers the final difference r1 (a short shift register). k1 then has the same algorithm applied to generate k2*16+r2, and so on until kn is 0.
- Delay paths: Delay paths are simply shift registers. Depending on the delay path, the counting implementation above may be used if only one thing can be in the path at a time.
- FIFOs: The LUT shift registers form the basis of FIFOs. The FIFO depth is set by the four address lines of the LUT
- Control logic: With various control logic, the timing interdependencies can be programmed into a shift register. These implement very efficiently in a LUT. The shift register can be cyclic so it repeats forever. This has been used in an I2C controller where two parallel shift registers were used, one for the data and one for the start and stop bits. The cycle was initiated by a LUT-based counter. Similar control logic has also been used for SDRAM timing, which requires low latency and low fan-in/out.
This LUT functionality can be hidden inside a module, and if the code is moved to an architecture which doesn't support LUTs being used as shift registers, a different implementation can be substituted.
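The recursive k = k1*16 + r1 split described above is just a base-16 decomposition of the delay length. A small sketch (Python rather than HDL, purely to show the arithmetic):

```python
def srl16_stages(k):
    """Split a delay of k cycles into base-16 remainders, least significant
    first: k = r1 + 16*(r2 + 16*(r3 + ...)). Each remainder corresponds to
    one SRL16 stage, so the chain needs len(result) LUTs rather than a
    wide flip-flop counter."""
    digits = []
    while True:
        k, r = divmod(k, 16)
        digits.append(r)
        if k == 0:
            return digits

def total_delay(digits):
    """Reconstruct the overall delay from the per-stage remainders."""
    k = 0
    for d in reversed(digits):
        k = k * 16 + d
    return k
```

For example, a delay of 1000 cycles decomposes as 1000 = 8 + 16*(14 + 16*3), i.e. remainders [8, 14, 3] and three chained LUT stages instead of a 10-bit flip-flop counter.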
Posted in Hardware design on January 05, 2009 by Russell Hocken
Many devices are battery powered, and there is a wide range of battery technologies to choose from: NiCad, NiMH, Lithium, Alkaline, etc. They all behave differently and have different caveats. We have designed many products to run off battery power. Often the battery is fixed for the product; however, sometimes it is desirable to use a standard battery - such as an AA for example. The problem is there is no guaranteeing which AA a user will use. Nominally an AA battery is 1.5v, and the current capacity can vary from 600mAh to 3000mAh. Furthermore, rechargeable AA batteries are often 1.2v, while a fresh Alkaline AA battery can read as high as 1.7v.

3.3v is a common voltage for digital electronics, and it is quite tempting to connect two AA batteries in series to produce 3.0v and assume that is the power supply sorted. Unfortunately, given the range of AA 'voltages' indicated above, combined with the fact that battery voltage decreases with load and remaining capacity, this approach is not acceptable. What needs to be done instead is to have a boost converter generate the system 3.3v.

On a recent design we used a Linear Tech LTC3539. This works over an input range of 700mV to 5v and can provide up to 2A. If the input is above the system rail - as can happen with some fresh AA cells - the converter still regulates the output voltage (albeit not as efficiently). If the system rail is 3.3v, then the batteries will quickly fall beneath this threshold, so the power loss due to this is not great. If such a chip is used to generate a lower voltage rail, then system operating life will suffer more. Using such a chip to generate a 3v3 rail provides maximum battery life for the system and ensures the majority of energy is drawn from any AA cell before the batteries are 'flat'.
Posted in Hardware design on December 03, 2008 by Russell Hocken
Here are some quick FPGA pointers
- Be very clear on what sort of reset, asynchronous/synchronous you want. An asynchronous reset in clocked logic can result in your whole design becoming one big metastable input
- Often a reset is not required. Most FPGAs upon start up can have their initial contents specified for signals (i.e. signal bla : std_logic := '1';). Doing this removes one signal from almost every LUT in your design, removes the net with the biggest fanout, and removes other reset timing problems. Don't feel as though you NEED to have a system reset pin. This can be done through the FPGA programming interface if really required (a well designed system and FPGA should not really need a reset).
- Understand metastability (that is, where a changing asynchronous input is sampled and the result can sit in an undetermined state). The undetermined state is an obvious problem, but the case where, due to logic structure, the 'same' signal actually resolves differently in two different places in the design on the same clock edge can cause all manner of problems. This is easy to do accidentally, as in:
process(clk)
begin
  if rising_edge(clk) then
    if b = '1' then
      a <= a + 1;
    end if;
  end if;
end process;

If b is asynchronous, then a may take on some random value if b changes close to a clock edge.
- Specify timing constraints: especially on clocks and any timing critical IO. If you don't, your system may not meet timing, you'll not realise it, and you'll spend a long time tracking down odd behaviour.
- If you don't care what a signal is, then say so. The implementation tools can take advantage of this and choose whatever results in the fastest or smallest implementation, e.g.
process(clk)
begin
  if rising_edge(clk) then
    if b = '1' then
      a <= X"01";
    elsif c = '1' then
      a <= X"10";
    else
      a <= "XXXXXXXX";  -- don't care
    end if;
  end if;
end process;
- Count to a predefined number; initialise to an arbitrary one. Often this means counting down to zero. This results in a smaller fan-in on the comparison, and thus lower propagation delay and higher speed.
- Clock as slow as you can. This makes your life easier and reduces power consumption.
- Pipeline. Pipelining improves throughput. It's harder to think about and does have complications, but can make higher clock speeds easier to achieve.
- Plan your pinout BEFORE committing to a layout. Too often the FPGA firmware developer is badgered into picking a pinout so the hardware can be built. RESIST! There is nothing more annoying than trying to make a design work with pins and clocks in the wrong place on the package, so that propagation delay is increased. Take particular care with VREF, VRN and VRP pins. Realising you've missed these can ruin your day. Ideally, generate a design with a proposed pinout to check for any problems.
- Where you can, allow the PCB layout guys to swap pins. It is equally annoying for the PCB layout engineer to have to fan out a mess of an FPGA pinout which, when all is said and done, is arbitrary. You'll get better tracking, better decoupling and better signal integrity.
- Reduce the number of signals contributing to any other signal. This reduces fan-in and fan-out, and thus propagation delay and power consumption.
Posted in Hardware design on September 08, 2008 by Russell Hocken
Bluewater has been involved in several projects with the goal of replacing a legacy system, either because the original supplier was no longer available, parts had become non-existent, or simply to reduce maintenance costs. Typically the systems are relatively simple, but without any technical details they can be difficult to decipher. Because the systems predate the advent of the Internet, interface details and user manuals are quite difficult to come by.

We performed this type of reverse engineering task when developing our DDS system, which we designed to replace legacy tape drives in telephone exchanges. The telephone exchange as a whole continued to function correctly and fulfil its fundamental tasks, but the maintenance costs of the tape drives were becoming prohibitive. For the DDS system, Bluewater was informed that there was a single interface called Pertec. An Internet search yielded a pinout and limited protocol information. Upon beginning investigation at the customer's site, it was discovered that there were in fact three very different interfaces: Pertec, Kennedy, and a custom cartridge tape interface that may well have a standardised name, but no one could tell us! The DDS uses a Snapper 255 CPU module. The FPGA on the Snapper 255 was invaluable in allowing us to alter the 'hardware interface' to suit these varied interfaces as we came to understand their nuances.

The process is a tedious one. Using a high-speed logic analyser and custom analysis software, we would perform an operation on the original tape unit, then repeat the same operation on our unit and compare the respective waveforms. From this, we were able to determine what was significant and what was inconsequential. We would then modify our system to match the original and repeat ad infinitum (or so it feels when doing so). The goal is to slowly bring our system up so that it matches the original in all possible usage cases.
The obvious downside of this approach is that we can only replicate and verify functionality that we can exercise. If the original system is configured slightly differently, then we have to expand our model to match. We have systems working flawlessly for years with one customer, but for any new customer we prefer to visit in person and verify their particular set-up. It's a very time-consuming task - and not a very glorious one at that. But at the end of the day, we have replicated legacy systems with only limited details and given them a new lease on life. Along the way we added more modern features such as solid state storage, lower power consumption, and remote access and control. Whilst working on these recreation projects, I often spare a thought for anyone who - in 20 years' time - may have to reverse engineer a SATA or PCI-E interface. Hopefully the documentation for these will still be around, and in more detail, as what we have had to work with has been next to nothing.