Posted in Hardware design on September 10, 2008 by Daniel Keis
For designs to become reality, they need to go through the prototype stage. Very few designs of high complexity work the first time; hindsight is normally 20:20. Since most of our work uses high-density miniature components on multi-layer boards, we have developed techniques to rework boards in such a fashion that we can test our changes before producing the next revision. Prototype designs often include provision for modification and adjustment through resistor fitment options. Where this is not the case, PCB tracks need to be cut and wires placed to reroute them. These wires are normally kept as short as possible and fitted in such a way that no components are covered, allowing further rework to be carried out if required. Solder mask can be scratched off to allow connection to otherwise inaccessible areas of the board. Chip legs can be lifted without removing the part, allowing reconnection to both pin and pad if required. In extreme (very rare) cases we have replaced components with different footprints and pinouts, and got these working before using them in the next revision. As these modifications are not known about before going to manufacture, some demanding but interesting rework has been carried out without any fear of harming the prototypes.
Posted in Hardware design on September 08, 2008 by Russell Hocken
Bluewater has been involved in several projects with the goal of replacing a legacy system, either because the original supplier was no longer available, because parts had become unobtainable, or simply to reduce maintenance costs. Typically the systems are relatively simple, but without any technical details they can be difficult to decipher. Because the systems predate the advent of the Internet, interface details and user manuals are quite difficult to come by. We performed this type of reverse engineering task when developing our DDS system, which we designed to replace legacy tape drives in telephone exchanges. The telephone exchange as a whole continued to function correctly and fulfil its fundamental tasks, but the maintenance costs of the tape drives were becoming prohibitive. For the DDS system, Bluewater was informed that there was a single interface, called Pertec. An Internet search yielded a pinout and limited protocol information. On beginning the investigation at the customer's site, it was discovered that there were in fact three very different interfaces: Pertec, Kennedy, and a custom cartridge tape interface that may well have a standardised name, but no one could tell us what it was! The DDS uses a Snapper 255 CPU module. The FPGA on the Snapper 255 was invaluable in allowing us to alter the 'hardware interface' to suit these varied interfaces as we came to understand their nuances. The process is a tedious one. Using a high-speed logic analyser and custom analysis software, we would perform an operation on the original tape unit, then repeat the same operation on our unit and compare the respective waveforms. From this, we were able to determine what was significant and what was inconsequential. We would then modify our system to match the original and repeat ad infinitum (or so it feels at the time). The goal is to slowly bring our system up to the point where it matches the original in all possible usage cases.
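In spirit, the compare step works something like the sketch below. The actual capture format, tolerances and tooling of our analysis software aren't described here, so everything in this snippet (the trace representation as (time, level) samples, the microsecond tolerance) is invented for illustration: it just shows the idea of reducing two logic-analyser captures to their edges and checking that the replica toggles in the same order, at roughly the same times, as the original.

```python
# Hypothetical illustration only - the real analysis software and its
# capture format are not described in the post.

def edges(trace):
    """Reduce a sampled trace of (time, level) tuples to its
    (time, new_level) transitions."""
    result = []
    prev = None
    for t, level in trace:
        if prev is not None and level != prev:
            result.append((t, level))
        prev = level
    return result

def traces_match(original, candidate, tolerance_us=1.0):
    """True if both traces toggle in the same order, with every edge
    of the candidate within `tolerance_us` of the original's timing."""
    a, b = edges(original), edges(candidate)
    if len(a) != len(b):
        return False
    return all(lvl_a == lvl_b and abs(t_a - t_b) <= tolerance_us
               for (t_a, lvl_a), (t_b, lvl_b) in zip(a, b))

# Example: the replica's strobe fires 0.4 us late - inside tolerance.
ref = [(0, 0), (10, 1), (20, 0)]
ours = [(0, 0), (10.4, 1), (20.2, 0)]
print(traces_match(ref, ours))  # True
```

In practice the judgement of what counts as "inconsequential" (absolute timing, say, versus edge ordering) is exactly the part that takes the many iterations described above.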
The obvious downside of this is that we only replicate and verify functionality that we can exercise. If the original system is configured slightly differently, then we have to expand our model to match. We have systems that have worked flawlessly for years with one customer, but for any new customer we prefer to visit in person and verify their particular set-up. It's a very time-consuming task - and not a very glorious one at that. But at the end of the day, we have replicated legacy systems from only limited details and given them a new lease on life. Along the way we added more modern features such as solid-state storage, lower power consumption, and remote access and control. Whilst working on these re-creation projects I often spare a thought for anyone who - in 20 years' time - may have to reverse engineer a SATA or PCI-E interface. Hopefully the documentation available to them will be more plentiful and detailed than what we have had to work with, which has been next to nothing.
Posted in Hardware design on September 05, 2008 by Andre Renaud
A while ago we developed a simple AT91SAM7-based board, intended for building smart peripherals to off-load processing in a system with a vast array of different inputs. This worked well at the time, and as it is so simple, we have tended to pick it up for all kinds of small prototype tasks. Having a platform with a good amount of peripheral control that can be easily prototyped into all manner of different situations has been of great advantage to us. It plugs into standard 2.54mm headers, and so attaches easily to a variety of carrier boards. In particular, we have modified it to work in the following situations:
- RF link controller - translating from a standard RS232 port to the bit-wise signal required to transmit over a 2.4GHz RF link
- Auxiliary test controller - under control of a PC (via its USB device port), we've used the EKit module to test other modules: checking voltages, driving inputs, communicating over peripheral buses, etc. This allows for automated testing of our various modules.
- Comms controller - We used it as the central CPU in an RS422 industrial communications unit, controlling a variety of audio amplifiers, input switches and display LEDs.
The re-use advantages are quite apparent when we pick up a new system: we already have the infrastructure, software base and peripheral drivers in place, and can really hit the ground running on a new design. Since the module can also be prototyped onto a simple solderless breadboard, we can quickly mock up a system in both software and hardware.
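The auxiliary-test use lends itself to a small sketch. The EKit's actual command protocol isn't documented here, so the command strings ("volt adc0", "gpio set led 1") and the in-memory stand-in for the USB-serial link are invented purely to illustrate the shape of a PC-side automated test run: send a command to the module, parse the reply, record pass/fail.

```python
# Illustrative only - the real EKit protocol and command set are our own
# and are not described in the post; these names are made up.

class FakeModule:
    """Stands in for the EKit's USB-serial link so the sketch runs
    without hardware."""
    def request(self, cmd):
        replies = {"volt adc0": "3.29", "gpio set led 1": "ok"}
        return replies.get(cmd, "err")

def run_tests(link):
    """Drive the module under test and collect pass/fail results."""
    results = {}
    # Check a supply rail is within 5% of 3.3V.
    volts = float(link.request("volt adc0"))
    results["rail_3v3"] = abs(volts - 3.3) <= 0.165
    # Toggle an output and expect an acknowledgement.
    results["led_drive"] = link.request("gpio set led 1") == "ok"
    return results

print(run_tests(FakeModule()))  # {'rail_3v3': True, 'led_drive': True}
```

On the real bench the fake link would be a serial port to the EKit, but the PC-side structure - a scripted sequence of measure/drive commands with tolerances - stays the same.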
Posted in Uncategorized on August 21, 2008 by Daniel Keis
Three engineering students were gathered together discussing the possible designers of the human body. One said, "It was a mechanical engineer. Just look at all the joints." Another said, "No, it was an electrical engineer. The nervous system has many thousands of electrical connections." The last one said, "Actually it was a civil engineer. Who else would run a toxic waste pipeline near a recreational area?"
Posted in Industry News on August 18, 2008 by Administrator
It has been a bit over two years since the Nvidia ARM license was announced. At the time we were promised high-definition video and high-performance chips with a graphics focus. It takes a couple of years for a new licensee to get their feet under the table, so it's a good time to have a quick look at the results. This video shows some of the technology at work, and it certainly looks compelling: from the specs, it can drive a large HD display as well as a Blu-ray player. Another video (on the HTC Touch Diamond) shows the Opera browser. Nvidia has recently named their ARM line 'Tegra'. This is a 700MHz ARM11 MPCore (multi-core CPU) with high-end graphics capability. Support on the web site looks weak, and it seems to be aimed at Windows. The HTC Advantage is one consumer product I can see with this technology; it uses Windows too. The target market seems to be tier 2/3 Asian manufacturers of electronic gadgets. Nvidia sees its competition as the Intel Atom, but this is a strange choice. Sure, Tegra looks great against the Atom - 1/10th the size and 1/10th the power, they claim - but most ARM SoCs look good against the Atom. How does it stack up against TI's OMAP3530, for example? Surely this is the real competition - no one in their right mind is going to put an Atom into a small mobile device. The focus on Windows perhaps just reflects the market they are in, as reported on LinuxDevices:
Asked about future Linux support, NVidia Spokesperson Andrew Humber replied, "Market demands will always dictate the direction in which we take a product, and this is true of all of NVidia's businesses. However, at this time we are focused entirely on Windows Mobile and Windows CE."
This is in strong contrast to both Intel and TI, who go to great lengths to support Linux on their products. Is Nvidia pursuing the right strategy here? There is no need to argue with Intel about the merits of using ARM versus x86 in mobile devices - that argument was surely won years ago. And the focus on Windows (while it has the advantage of concentrating Nvidia's limited resources on a single platform) plays into Intel's 'we are x86' story, and allows people to pigeonhole Nvidia into a customer base with limited software expertise and little chance of producing an iPod-killer. Could Nvidia aim higher?