
Samsung 1.8-Inch Disk-Drive Teardown


A look inside the drive brought interesting revelations in terms of the overall design, and unearthed a rather perplexing part.

I have two Samsung 1.8-in. USB disk drives that I use to back up all my work. They have a capacity of 250 GB. I kept one in the safe-deposit box and rotated it with the other one about once a month. Now I use 1-TB solid-state drives to back up my work off-site. I dug the Samsung drives out of the drawer to see if they still worked. One of them did, but only once. After copying a bunch of music files to the disk, it functioned for a few days, and then would just hang my Windows 7 laptop. For an engineer, it was a perfect opportunity to take something apart (Fig. 1).

1. The Samsung 1.8-in. disk drive uses two platters to achieve a capacity of 250 GB.

What I liked about the drive was its diminutive size (Fig. 2). It’s smaller than a deck of cards, but would hold all my work. Being USB, I could take a drive into work in case I needed some old file from previous jobs. One reason I dug it out of the drawer was to see if it could serve music to my new Joying Android car stereo. Turns out the Joying saw it and played the music, but since I formatted this drive NTFS years ago, I may have bricked it when plugging it into the radio. Forums say that most car stereos want FAT32 file systems for external storage.

2. Samsung’s 1.8-in. disk drive has a USB controller built in and is powered solely by the USB connector.

The drive was mounted in the plastic enclosure with two slip-on rubber shock absorbers (Fig. 3). Consumer product manufacturers seem to love Kapton tape, as evidenced by the small piece stuck to the USB connector housing.

3. Rubber bumpers cushion the disk drive inside its plastic case.

Elegant Guts

The guts of the disk drive are elegant and well-designed (Fig. 4). From left, the top housing is injection-molded, with a black damping pad in the center. The pad and some screws are tossed inside for the picture. The label is above it. Next is one of the rubber shock mounts. The top sheet-metal case of the drive is next, with the spindle nut and one of the platters above it. Next are the magnets and head assembly. Above them is the spacer between the two platters. Next is the bottom sheet-metal case, with the spindle motor still in place. To its right is an orange plastic parking component for the head, along with the other rubber shock mount. To the right of that is the PCB (printed circuit board) that mounts all of the electronics. It has a large hole and a corner missing, making the drive as thin as possible. Above the PCB is a metallized foil patch that connected to the shell of the USB connector and covered the USB bridge chip. At the far right is the bottom cover, with an insulating spacer that went in between the PCB and the drive. The black damping pad is underneath the spacer in its correct location.

4. Spread out, the pieces of the disk drive reveal precision manufacturing in a high-volume consumer product.

The PCB has vias stitched along its edges to prevent electromagnetic interference (EMI) from radiating out the edge of the board (Fig. 5). This is the side of the board that faces out. At the left is the connector that connects to the flying read head. The crystal and JM20335 USB-to-ATA bridge chip were covered by that metallized pad.

5. The PCB in the disk drive is shaped to fit.

The inside-facing side of the PCB holds a Texas Instruments TLS2309 controller chip (Fig. 6). This chip runs the spindle motor. Also on the top edge is the connector for the spindle motor. A large tantalum capacitor sits next to it, to supply surge current. At the bottom is the Marvell 88i8038 PATA (parallel ATA) controller and read head interface. On the right edge is the USB connector. Below that, a blue LED illuminates when you plug the drive in. Below the LED is a voltage-regulator chip.

6. The bottom of the disk-drive PCB also has components and connectors.

The connector to the spindle motor flex-cable is well-designed (Fig. 7). A screw goes right in the middle of the connector to ensure pressure is maintained on the contacts. All of the contacts look to be gold-plated. The black insulating separator is in its design position in this shot. It may also serve as sound damping. In addition, it may be conductive enough to provide shielding for the spindle motor, which radiates EMI as it runs.

7. The four-pin connector for the spindle motor is a sophisticated design that connects to a flex circuit mounted in the case.

The spindle motor is epoxied in place to the sheet-metal case (Fig. 8). The read head flying arm and connector are made as a sub-assembly. This allows testing before final assembly. You can see the loop of wire that’s positioned between the magnets to make the head flying arm move. The magnets are rare-earth composition and very powerful. The flying arm was mounted to the case with a 3-pointed screw. My iFixIt precision screwdriver set had one of these bits, as well as minuscule Torx-head bits used elsewhere in the drive. The newer 64-bit kit looks even more capable.

8. The case mounts the spindle motor and the flying head and magnet assembly, as well as the connector for the head motor and signal lines.

There was a mysterious black plastic piece inside the drive (Fig. 9). The underside was vented to the atmosphere. But the internal cavity so vented seems to be sealed from the inside of the disk. It may be that the white film on top is a permeable membrane that allows the air pressure inside the drive to equalize to ambient. The small white pad is also a mystery. It’s held captive by the black plastic piece, but I could not understand what it would be used for. Leave your suggestions in the comments.

9. The function of the black plastic piece on the upper right of the case half is a mystery.


In Reversal, Wolfspeed Buys Infineon's Radio Frequency Power Business


Last year, Infineon agreed to pay $850 million for Cree’s power and radio frequency business unit, Wolfspeed. But American officials refused to approve the deal and, after weakly toying with fixes and compromises, Infineon waved the white flag and pulled out of the deal.

Now both companies have hammered out a deal in the opposite direction, following Cree’s vow to aggressively invest in its Wolfspeed business through the end of the decade.

On Tuesday, Wolfspeed said that it had acquired Infineon’s radio frequency power business for about $430 million, bolstering its catalog of power amplifiers used in wireless infrastructure and radar. That includes chips manufactured with gallium nitride layered onto silicon carbide to handle the wider bandwidths and higher frequencies of 5G.

“The acquisition strengthens Wolfspeed’s leadership position in RF GaN-on-SiC technologies, as well as provides access to additional markets, customers and packaging expertise,” said Cree’s chief executive Gregg Lowe in a statement. The deal “positions Wolfspeed to enable faster 4G networks and the revolutionary transition to 5G.”

The deal encompasses Infineon’s factory for LDMOS and GaN technologies located in Morgan Hill, California. The facility also includes packaging and test operations. Wolfspeed will also take on around 260 employees from the Neubiberg, Germany-based Infineon, including more than 70 engineers.

Wolfspeed generated $221 million of revenue last year and has shipped more than 15 million devices since Cree began targeting compound semiconductors at applications other than lighting. Infineon’s business is expected to add $115 million to its balance sheet in the first twelve months after the acquisition.

Firms Need Engineers, But Resist Paying More to Get Them


The engineering job market is tight. But many companies are somewhat unrealistic about the importance of offering a competitive salary, which isn’t going to get them top talent.

It’s probably no surprise that the job outlook for engineers is positive. In a January Bureau of Labor Statistics report, employment in the electrical and electronics space is expected to grow seven percent by the year 2026.

To capitalize on the country’s economic growth, organizations are expanding and looking for more workers to fill traditional roles in addition to new and emerging positions. As such, the engineering market is experiencing a lower unemployment rate compared to the national average, with tens of thousands of jobs expected to be added in the next five years.

At Randstad, we see demand rising for a diversity of engineering skill sets, where competition for candidates is getting fierce:

●      Validation Engineers: While the overall job outlook for validation engineers has struggled in past years, demand for them is growing. In fact, total employment for validation engineers is projected to top 194,000 in 2018.

●      Controls Engineers: As automation gains traction in manufacturing and beyond, controls engineers are increasingly in demand.

●      Robotics Engineers: In a recent poll, 81 percent of senior executives surveyed cited robotics as one of the top five industrial sectors that will hire new workers through the end of this decade.

●      Embedded Engineers: Increased demand by consumers and businesses for more connectivity and smarter, more power-efficient electronic technology is driving the demand for embedded systems engineers. Especially sought out are embedded developers with not only the required coding expertise, but also a deep understanding of how software and hardware interact and communicate. 

Almost across the board, it is proving extremely difficult for companies to fill open engineering positions today. HR decision makers report that the average time to fill a non-executive position is 2.6 months, and that it takes five months or more to find leadership and executive talent.

A key challenge: Many companies are somewhat unrealistic about the importance of offering a competitive salary. Add in the desire for qualified talent with exacting skills and the right cultural fit, and the hiring challenges increase exponentially.

Engineering salaries vary widely, based upon local market conditions and position-specific requirements like experience levels, job title, professional certifications, and knowledge of specific software tools. (Randstad is the source of the salary figures below.)

Nonetheless, many companies seem to put greater value on training opportunities and work-life balance programs than monetary compensation. While these kinds of indirect benefits can be attractive, they alone are not going to attract the best talent. The best engineers know what they’re worth and will hold out for a commensurate salary.

Furthermore, factors like an aging workforce and fewer engineering graduates are creating a more competitive talent market. Companies are in particular need of engineers with experience under their belt, but this high demand means candidates have their pick when it comes to the best job offer. In fact, year after year, engineering tops the list of majors with the highest average starting salary.

The market for engineers is only going to get tighter. It will especially be a candidate-driven market, with 65,000 new engineering jobs expected to be created by 2024. Full details are available in Randstad’s 2018 salary guides.

11 Myths About Wireless


We take wireless technology for granted, even though it is basically “magic.” And that perceived magic has led to myths and fallacies that need to be dispelled.

Wireless, or radio if you prefer, is a strange and wonderful phenomenon. Voice, music, video, and data miraculously move almost instantaneously from one place to another invisibly through the air. How could that be? Our entire environment is an invisible fog of thousands of electromagnetic waves. The whole phenomenon has been amazing to me since I was a kid. Even though I understand it I am still in awe of the technology.

That said, wireless technology is a complex subject. It has taken me most of my lifetime to learn it. And I still don’t know it all. But to non-wireless engineers, radio must seem an enigma. There’s much to get accustomed to and understand. What follows are 11 myths about wireless you may not know but should.

1. Wireless was invented by Marconi.

No, it was not. I would give my vote to Heinrich Hertz, who deserves more recognition for his early demonstrations of the concept; at least we use his name as the unit of frequency measurement. As for Marconi, he was a major contributor to the technology and is probably best known for putting the theory into practice: he engineered the early radio equipment and demonstrated its capabilities. And then there’s Tesla, who did little to advance the science beyond a few clever demonstrations, yet was posthumously recognized as radio’s inventor when the U.S. Supreme Court upheld the priority of his patents in 1943.

2. The Federal Communications Commission is the primary communications regulator.

The FCC implements the rules and regulations regarding most commercial and personal wireless products and applications. They manage the spectrum and define all kinds of guidelines like power, antennas, bandwidth, modulation, and interference. But they aren’t the only U.S. regulatory agency. The other agency that most of you have not encountered is the National Telecommunications and Information Administration (NTIA). The NTIA is the manager and regulator of all government and military wireless spectrum and equipment. It’s a division of the Department of Commerce. They work closely with the FCC to rule the airwaves.

3. Radio waves work like magnetic induction.

Not so. A radio wave is really a combination of an electric field at a right angle to a magnetic field. The two travel together in a direction perpendicular to both fields. As they propagate from the transmitting antenna to the receiving antenna, they stay together. Essentially the fields break away from the antenna, or radiate, and then actually support and rejuvenate one another along the way. The math describing that process was spelled out as far back as 1873 by James Clerk Maxwell. This signal that’s radiated is called the far field. It’s the real radio wave.

The field close to the antenna, typically within one wavelength, is called the near field. There, energy transfer is more by magnetic field than by the combined magnetic and electric fields. The near-field signal is non-radiative; it’s really inductive coupling, like what occurs between the primary and secondary windings of an air-core transformer. The near field isn’t the real radio wave.

4. The propagation of a radio wave is basically the same for all wireless applications.

No way. Radio signals act differently depending on their frequency. Low-frequency signals in the 50- to 3000-kHz range travel by ground or surface wave. The vertically polarized signal hugs the ground and is mostly dissipated after a few hundred miles. AM broadcast stations represent one example.

Signals in the 3- to 30-MHz range travel by sky wave. The signals essentially are refracted by the ionosphere back to earth. Depending on the angle of radiation, time of day, and the specific ionosphere layer encountered, the signal could travel by skipping long distances nearly around the world. Frequencies over 30 MHz and up into the mmWave range travel by direct line of sight from antenna to antenna. These signals are usually reflected or absorbed, so range is generally limited.

5. We have totally run out of frequency spectrum.

Not completely, but we’re working toward that it seems. Most of the so-called “good” spectrum (~500 MHz to 6 GHz) is pretty much consumed, but plenty of spectrum exists at the higher frequencies beyond about 30 GHz.

Some say there’s a spectrum crisis as more wireless products and services are developed. One contributor to the shortage is the growing Internet of Things (IoT) movement. With billions of new devices coming on line, spectrum usage is something to worry about. But it’s the cellular industry that lusts after spectrum the most. The FCC hosts auctions to sell off available chunks of spectrum when they become available. Billions of dollars are collected.

6. Radio broadcasting is dead.

You may have gotten the impression that AM, FM, and TV broadcasting were on their way out thanks to all the internet streaming of music and video. But they’re not. While the number of AM stations has declined a bit, FM is growing. Satellite radio is also healthy. Furthermore, almost 20% of the U.S. population gets its TV by over-the-air (OTA) broadcasts. This includes satellite TV broadcasting. On top of that, shortwave broadcasting is still around; not so much in the U.S., but it’s still big in Europe, the Middle East, Africa, and other more remote parts of the world.

7. The most widely used wireless standard is Wi-Fi.

Wi-Fi is certainly a heavily used wireless standard. But in terms of sheer volume of radios in use, Bluetooth is probably the more widespread. It’s in all cell phones, most cars and trucks, headphones, speakers, retail beacons, and a mixed bag of other applications. It takes at least two radio chips, one at each end of the link, to implement any Bluetooth application. That’s why billions of Bluetooth radio chips are sold annually.

8. Cell phones give you a brain tumor.

That myth has been around ever since the first cell phones emerged in the late 1980s. It’s been studied multiple times, and the outcome is that cell phones don’t cause brain tumors. Perhaps if you held the phone to your head eight or so hours a day, you might get brain damage. But today, holding the phone to your ear and head for a voice call has largely been replaced by holding the phone in your hands in front of you while you text, read email, or watch a YouTube video. No cancer.

9. Wireless data transfer is always faster than wired data transfer.

Not true. Wired data communications, say by Ethernet or fiber optics, are very solid and usually faster than wireless. Ethernet can do 100 Gb/s, and optical is now doing up to 400 Gb/s using PAM4. With a solid link, data can be faster because it doesn’t have to deal with all of the free-space link and path problems of wireless.

Wireless free space path loss is very high; there’s always noise and interference that limits the data rate. But wireless has come a long way over the years with error correction, multichannel modulation like OFDM, MIMO, and phased arrays. As a result, wireless begins to approach wired speeds. Under ideal conditions, wireless data can hit levels of 10 to 100 Gb/s.
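To put a rough number on that path loss, the standard free-space formula is easy to evaluate. The short Python sketch below computes Friis free-space path loss; the 100-m distance and the three frequencies are arbitrary examples chosen here for illustration, not figures from any particular system.

import math

def fspl_db(distance_m, freq_hz):
    # Free-space path loss in dB: 20*log10(4*pi*d*f/c)
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Illustrative values only: a 100-m link at three common frequencies
for f in (2.4e9, 28e9, 60e9):
    print(f"{f/1e9:5.1f} GHz over 100 m: {fspl_db(100, f):5.1f} dB")

The 60-GHz link gives up roughly 28 dB more than the 2.4-GHz link over the same distance before noise and interference even enter the picture, which is why techniques like OFDM, MIMO, and phased arrays matter so much.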

10. Rain and snow make satellite TV, phones, and data services unreliable.

You have probably heard this one, but it’s not true. Actually, at some frequencies in older systems, rain does attenuate the signal. But today, most components, equipment, and systems compensate for it with good link margins. We would not be using so many satellites if the coverage were iffy. What would we do without things like GPS, worldwide sat phones, space telescopes, and military surveillance?

11. Millimeter waves will never be practical.

Maybe that was true in the past, but today mmWaves are widely used thanks to the availability of semiconductor devices to generate and process these signals. Millimeter waves cover the 30- to 300-GHz range. All sorts of systems use them, especially radar and satellite. The 802.11ad WiGig WLAN products at 60 GHz are now available. Automotive radars use 77 GHz. And many of the forthcoming 5G cellular and fixed wireless access systems use mmWaves. Researchers are working on terahertz wave technology now.

There should be a wireless appreciation day to celebrate its existence. How about every day?

SPICE It Up: Understanding and Using Op-Amp Macromodels


Sponsored by Texas Instruments: Going through the simulation process can bring a greater level of confidence to your designs. Once problems are identified and corrected using SPICE, then build the circuit and test it on the bench.


Modeling the performance of an analog device with SPICE is one of the standard techniques used by circuit designers to perform initial characterization and performance analysis before putting together a real-world circuit and testing it on the bench (Fig. 1). SPICE is an acronym for Simulation Program with Integrated Circuit Emphasis. It was originally developed at UC Berkeley in 1973 to verify transistor-level operation of a design before building the integrated-circuit equivalent.

1. TI’s TINA-TI SPICE program helps analog designers solve circuit problems in the virtual world (Source: TI Blog: “SPICE it up: Why I like TINA-TI (part 1)”)

As a tool for the IC designer, the first SPICE programs used detailed mathematical formulas that modeled the behavior of each individual resistor, inductor, transistor, etc. An IC designer requires a high level of accuracy to emulate the expected performance of the final design. As the IC complexity increases, so does the level of processing power needed to run the SPICE simulation.

The design group can use a powerful engineering workstation that runs for several hours to yield a precise result. Circuit designers, though, are more interested in a program that simulates the main performance characteristics of a design so that they can find and fix obvious problems. The program must also produce results in a timely manner running on a typical desktop or laptop.


SPICE Model Development: Macro and Behavioral Models

Instead of modeling individual transistors, integrated-circuit manufacturers have developed macromodels that simulate most of the performance specifications of an op amp by treating it as a black box that contains mathematical functions representing the internal functions. The SPICE macromodel implementations are simplified groupings of the individual component equations, so the macromodel runs much faster than the design-group simulation and is more robust.

Each analog circuit is different. Thus, designers carefully tune and test each macromodel to ensure that it accurately simulates the behavior of the real device. Nonetheless, subtle differences exist between the performance of the real and modeled devices. The macromodel, for example, may omit second- or third-order effects in the interest of execution speed, or use a piecewise-linear lookup table to represent a smoothly varying curve. As the cost of processing power declines, models are becoming more accurate, and manufacturers are continually refining them.

Another way to simulate device operation is through a behavioral model. This technique models the operation of several components, typically grouped as a functional block, with various SPICE devices. The focus is on the input and output characteristics of the block, so the SPICE model may bear little resemblance to the circuit in the device.

A behavioral model accomplishes its task using relatively few components. This speeds execution time but may omit some of the secondary circuit effects. Most SPICE programs include the capability to define behavioral functional blocks and add them to a macromodel.

Although a simulation can’t replace the real device, an accurate macromodel can help the designer quickly identify many design issues and potentially shorten the design cycle.

A Macromodel Example

What building blocks go into a macromodel? Figure 2 shows a macromodel developed by Texas Instruments to simulate the architecture of its CMOS op amps. Features include the input common-mode range, MOSFET input-stage transfer characteristics, accurate frequency and transient response, slew rate, rail-to-rail output swing, and output short-circuit current.

 

2. A typical op amp macromodel includes both transistor-level and behavioral blocks. (Source: TI Application Report: “AN-856 A SPICE Compatible Macromodel for CMOS Operational Amplifiers” PDF)

The op-amp macromodel is embodied in its SPICE netlist. It’s a text file that describes the interconnections between components—MOSFETs, diodes, inductors, capacitors, resistors, etc.—and standard blocks such as voltage-controlled voltage sources (VCVSs) and voltage-controlled current sources (VCCSs). The SPICE program contains standard mathematical representations of active and passive components and building blocks, letting the user specify the parameters for each one. With MOSFETs, for example, the user can choose between several device models with different levels of complexity.

The CMOS op-amp macromodel contains several stages. The input stage includes several characteristic features: nonlinear input transfer characteristics, offset voltage, input bias currents, second pole, and quiescent power-supply current. Its heart consists of a differential amplifier comprising two simplified MOSFET models. The input offset voltage is modeled with an ideal voltage source (EOS) while input bias and offset currents are modeled by properly setting the leakage currents on the input protection diodes DP1–DP4. Current source I2 and series resistors R8 and R9 model the quiescent current—the current through R8 and R9 increases as the supply voltage increases, effectively modeling the operation of the real device.

R8 and R9 also form a voltage divider that establishes a common-mode voltage VH for the model directly between the rails. VCVS EH measures the voltage across R8 and subtracts an equal voltage from the positive supply rail. The resulting node (98) provides a reference point for many stages in the model. G0, a VCCS, and resistor R0 model the gain of the input stage.

The frequency-shaping stages add multiple poles and zeros to increase accuracy and precisely shape the macromodel’s magnitude and phase response. Each frequency-shaping stage has unity dc gain, making it easier to add poles and zeros without changing the low-frequency gain of the model.

The common-mode gain is modeled with a common-mode zero stage whose gain increases as a function of frequency. This stage consists of G4, L2, and R13. A polynomial equation controls VCCS G4: the circuit sums the voltages at each input pin (nodes 1 and 2) and divides the result by two to simulate the input common-mode voltage. The dc gain of the stage is set to the reciprocal of the amplifier’s common-mode rejection ratio (CMRR). Inductor L2 increases the gain of the stage by 20 dB/decade to model the CMRR roll-off that’s a characteristic of many amplifiers. The common-mode zero stage output (node 16) is fed back to the EOS source to provide an input-referred common-mode error.
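As a sketch of what that stage does, assuming the usual arrangement of the VCCS G4 driving R13 in series with L2 (the zero frequency f_z and these symbols are shorthand for this discussion, not TI’s notation), the stage gain can be written as:

\[ |A_{cm}(f)| \;=\; \frac{1}{\mathrm{CMRR}_{dc}}\sqrt{1+\left(\frac{f}{f_z}\right)^{2}}, \qquad f_z \approx \frac{R_{13}}{2\pi L_{2}} \]

At dc the stage gain is the reciprocal of the CMRR; above f_z the gain climbs at 20 dB/decade, reproducing the CMRR roll-off described above.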

The final frequency-shaping stage sends the signal to the output stage, which models several important functions, including the dominant pole, slew-rate limiting, dynamic supply current, short-circuit current limiting, the balance of the open-loop gain, output swing limiting, and output impedance.

The macromodel also simulates rail-to-rail output performance and the dynamic variation of supply current with load.  Although the model has many elements, it still executes more than 30 times faster than the individual-transistor model.

“Trust, But Verify” Works for SPICE Models, Too

The phrase “Trust, But Verify” rose to prominence during the 1980s, and was used by President Ronald Reagan many times when discussing relations with the former Soviet Union. The expression is just as applicable to SPICE macromodels: begin by assuming that the model is an accurate representation, but make sure to check its performance against the real device in a few key areas.

3. SPICE models are continually improving, but make sure you understand which datasheet specifications they will accurately reproduce. (Source: TI OPA326 Models)

Start by checking the manufacturer’s netlist and other documentation to make sure that you understand the capabilities and limitations of the model you’ve chosen. Figure 3 shows the comments section of the SPICE model for Texas Instruments’ OPAx320x, a precision, 20-MHz, 0.9-pA, low-noise CMOS op amp with rail-to-rail input and output (RRIO) performance.  The model’s applicability is clearly stated.

What are some of the key specifications to check?

Open-loop output impedance ZO is one area where many op-amp models fall short. A proper understanding of ZO over frequency is crucial for loop-gain, bandwidth, and stability analysis (Fig. 4).

4. An accurate model of output impedance ZO is key to modeling a circuit’s performance and stability. (Source: TI Blog: “’Trust, but verify’ SPICE model accuracy, part 4: open-loop output impedance and small-signal overshoot,” Fig. 1)

Spurred by changes in technology and design innovations such as rail-to-rail output voltages, class-AB bipolar-junction transistor (BJT) output stages have largely given way to rail-to-rail bipolar and common-drain complementary metal-oxide semiconductor (CMOS) designs. Consequently, ZO has changed from the largely resistive behavior of early BJT op amps to a complex impedance with capacitive, resistive, and inductive elements.

ZO appears in the macromodel’s small-signal path between the open-loop gain stage and the output pin. This impedance interacts with the op amp’s open-loop gain Aol, the load, and the feedback components to create the circuit’s overall ac response.

The test circuit in Figure 5 allows you to verify that a model’s ZO matches the data sheet. Inductor L1 provides a short circuit at dc to close the feedback loop while allowing for open-loop ac analysis; capacitor C1 provides an ac short to ground to prevent the inverting node from floating. The ac current source I_TEST back-drives the op-amp output. By measuring the resulting voltage at the output pin, you can determine the output impedance using Ohm’s law.

5. The recommended test circuit for output impedance. (Source: TI Blog: “’Trust, but verify’ SPICE model accuracy, part 4: open-loop output impedance and small-signal overshoot,” Fig. 2)

To plot ZO, run an ac transfer function over the desired frequency range and plot the voltage at Vout. Note that many simulators default to showing the results in decibels. If you plot the measurement on a logarithmic scale, Vout is equivalent to ohms.
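If you export that ac sweep as a table of frequency and output-voltage magnitude, the conversion to impedance is just Ohm’s law applied point by point. The Python sketch below assumes a 1-A ac test current (which is what makes the plotted voltage read directly in ohms) and a made-up file name and column layout; adapt both to whatever your simulator actually writes out.

import csv
import math

I_TEST = 1.0  # assumed 1-A ac test current, so |Vout| in volts reads directly as ohms

zo = []
with open("zo_sweep.csv", newline="") as f:   # hypothetical export: frequency (Hz), |Vout| (V), no header
    for freq, vout in csv.reader(f):
        zo.append((float(freq), float(vout) / I_TEST))

# Print every tenth point as magnitude in ohms and in dB-ohms for comparison with the simulator plot
for freq, z in zo[::10]:
    print(f"{freq:12.1f} Hz   Zo = {z:9.3f} ohm   ({20 * math.log10(z):6.1f} dB-ohm)")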

Figure 6 compares the macromodel ZO to the datasheet for TI’s OPA202, a precision, low-noise bipolar amplifier. The macromodel is a close match. Both impedances are very flat (that is, resistive) up to around 1 MHz, a characteristic typical of classic bipolar amplifier designs.

6. The output impedance of the OPA202 macromodel closely matches that of the datasheet (Source: TI Blog: “’Trust, but verify’ SPICE model accuracy, part 4: open-loop output impedance and small-signal overshoot,” Fig. 3)

Of course, output impedance isn’t the only macromodel parameter you might want to verify. There’s an informative series of blogs discussing the testing of other op-amp SPICE parameters: common-mode rejection ratio (CMRR); offset voltage versus common-mode voltage (Vos vs. Vcm); slew rate (SR); open-loop gain (Aol); input offset voltage (Vos); and noise, including voltage noise and current noise.

Practical Applications: Stability Testing

Once we’ve verified the accuracy of the op-amp macromodel, we can use it in a practical application. For op amps, instability in a circuit can manifest itself as continuous oscillation, overshoot, or ringing, and can be very difficult to debug. A good model, combined with the powerful analysis tools available in SPICE programs, can help predict and stabilize op amp performance in the virtual environment before assembling the real circuit.

As an example, let’s use the macromodel for the OPA211, another low-noise precision bipolar op amp.

7. Checking the stability of an op-amp circuit often requires changing the “real-world” circuit. (Source: TI Archives: SPICEing Op Amp Stability, Fig. 2)

Checking the response to a small-signal step function or square wave is a quick and easy way to look for possible stability problems. Figure 7 shows the simulation circuit, which differs from the real-world design in several respects. The input filter would slow the input edge of a step function, so it’s disabled. Instead, the input test signal connects directly to the op amp’s non-inverting input.

Similarly, the output circuit of R4 and CL filters the signal Vout seen at the circuit output; Vopamp shows the true op-amp response.

We’re looking for the small-signal step response, so the applied input step is 1 mV. With a circuit gain of 4, this results in a 4-mV step at the output. A large input step would add slewing time and reduce the overshoot.

Figure 8 shows the results of the analysis. In the time domain, the overshoot of Vopamp is 27%. From the graph in TI’s Analog Engineer’s Pocket Reference, this translates into a phase margin of around 38 degrees, below the 45 degrees considered the minimum for a stable circuit. The frequency response indicates amplitude peaking, another tell-tale sign of instability.
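The overshoot-to-phase-margin conversion that the pocket-reference graph performs can also be approximated with the classic second-order formulas. These are textbook relationships, not TI’s curves, and a real op-amp loop isn’t a pure second-order system, so expect the result to land near the graphical value rather than exactly on it.

import math

def phase_margin_from_overshoot(overshoot):
    # Second-order approximation: fractional overshoot -> damping ratio -> phase margin (degrees)
    ln_mp = math.log(overshoot)
    zeta = -ln_mp / math.sqrt(math.pi**2 + ln_mp**2)
    pm_rad = math.atan(2 * zeta / math.sqrt(math.sqrt(1 + 4 * zeta**4) - 2 * zeta**2))
    return math.degrees(pm_rad)

print(f"27% overshoot -> roughly {phase_margin_from_overshoot(0.27):.0f} degrees of phase margin")

Either way, the number falls short of the 45 degrees usually considered the minimum for comfort, which is the same conclusion the graph gives.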

8. The SPICE simulation uncovers problems in both the time and frequency domains. (Source: TI Archives: SPICEing Op Amp Stability, Fig. 3)

Although it’s possible to perform more in-depth SPICE analysis, this simple method is a good indicator of potential stability issues with a simple op-amp circuit. As we’ve discussed, the SPICE simulation is only as good as the op amp’s macromodel, and other components, parasitics in the printed-circuit-board layout, and poor power-supply bypassing can all affect the circuit.

SPICE simulation can provide a high level of confidence in your analog design, but it doesn’t guarantee performance. After identifying and correcting problems during the simulation phase, you should always proceed to build the actual circuit and test it on the bench.

Downloads, References, and Further Reading

Texas Instruments has teamed up with DesignSoft to offer TINA-TI, a SPICE-based simulation program that’s free to download (Fig. 1, again). TINA-TI is suitable for a wide range of analog and switched-mode power-supply (SMPS) circuits and comes preloaded with models for hundreds of TI parts. The program is easy to set up and use, and features powerful dc and transient analysis capabilities, an intuitive graphical user interface (GUI), and on-screen contextual help.

Several application notes discuss SPICE macromodel development: a helpful application bulletin reviews and compares different op-amp macromodels, AN-856 details the development of a macromodel for CMOS op amps, and AN-840 reviews the current-feedback op-amp topology.

Finally, you can learn more about TINA-TI, read relevant blogs, or post SPICE-related technical questions at the Simulation Models forum of TI’s E2E™ Community.


What’s All This Software Stuff, Anyhow?


The only thing more frustrating than using software is writing it. These days you can't just avoid it like Bob Pease did.

Back in February 2010, Bob Pease sent an email titled “Confidence in Software??” Pease was well-known for his distaste for computers.

It was no surprise that he didn’t think much of software. He wrote:

“I heard the story that, when the first Prius prototype was brought out for Mr. Toyoda to see a demo, they ‘turned the key,’ and... nothing happened. Apparently some idiot manager was so confident that they didn't even try out the car. It was a software problem. Are you surprised? Mr. Toyoda is sitting there and the driver turned the ‘key’ and nothing happened. Embarrassing.

“Now Toyota has got bad software in the brakes. First it won't go; then it won't stop. Are the Japanese inherently unsuited to do good software? I've never written any bad software, but I realize that many prototypes must be properly tested. I'm working on some systems now that will obviously need a lot of testing. Are you surprised? / Sigh. / rap”

Despite some bad experience with Japanese printer drivers, I don’t think bad software is the result of national origin. More bad software comes out of the USA than anywhere else. Think of those old HPGL plotter drivers. Indeed, a recent article in the Atlantic Monthly was titled “The Coming Software Apocalypse.” The URL base-name is equally amusing, “saving-the-world-from-code.”

The Atlantic article references the same Toyota unintended-acceleration problem Pease alluded to in his email. It also acknowledges that software is wickedly complex, “...beyond our ability to intellectually manage.” Intellect alone cannot help us understand something as complex as a software program. If you take 20 issues of Electronic Design magazine and arrange them on a shelf in all possible orders, that’s 20! (20 factorial) arrangements, about 2.4 × 10^18. At one arrangement per second, that’s more seconds than the universe has existed since the Big Bang. Even simple embedded programs may have 20 modules with many more branches.
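A quick back-of-the-envelope check of that arithmetic (the 13.8-billion-year age of the universe is the commonly quoted figure):

import math

SECONDS_PER_YEAR = 365.25 * 24 * 3600
universe_age_s = 13.8e9 * SECONDS_PER_YEAR     # roughly 4.4e17 seconds

arrangements = math.factorial(20)              # orderings of 20 magazines on a shelf
print(f"20! = {arrangements:.2e} arrangements")
print(f"Age of the universe = {universe_age_s:.2e} seconds")
print(f"At one arrangement per second: about {arrangements / universe_age_s:.0f} universe lifetimes")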

We knew decades ago that it is impossible to generate all possible test vectors for large programs. My programmer buddy says, “Over 100,000 lines of code, and you just poke it with sticks, hoping it doesn’t break somewhere else.”

It Takes a Rocket Scientist

Even common apps and programs are as complex as the Space Shuttle. Don’t be surprised if they blow up every few years. A major problem is that the complexity is hidden from users, managers, and even other programmers. Some bad software comes about because the engineers are under time pressure, so they just take demo code from various chips and modules and try to string them together into a functioning program. Other bad software comes about from lazy or sloppy programming.

Another programmer buddy told me “If people could watch code run, they would die laughing.” When I pressed him for an example, he noted that the program might spend considerable time creating some giant data structure in memory. Then the program would pluck just one byte out of all that data. And I know for a fact that sloppy code makes those giant data structures into memory leaks that eventually crash the program.

I asked John Gilmore, a co-founder of the Electronic Frontier Foundation and Sun Microsystems, about his thoughts on software. He wrote back:

“I've worked with lots of software engineers, and with lots of hardware engineers. I have managed a lot of software engineers too. One thing I've noticed is that hardware engineers are a lot better (as a class) at accurately estimating how long it will take them to get a project done.

“My own suspicion about this is that it's because of the inherent differences in complexity. Hardware designers have the benefit of the laws of physics, which mean that something happening over on the left-hand side of the board (or chip) probably won't affect something happening on the other side—unless there's an explicit wire between the two, etc. So you can get one subsystem working at a time, and it's unlikely that changes in a different subsystem are going to mess up the part that you just got working.

“But with software, it's all happening in one big address space, in random access memory (RAM). Every memory location is equidistant from every other memory location. Anything can touch anything else. And if one of those things contains some kinds of errors, such as stray pointers, then they can affect absolutely anything else in the same address space. So, in effect, the whole thing has to be working, or at least well-behaved, before a piece of software can be debugged in a predictable amount of time.”

This reminds me of IC design or RF engineering. In analog IC design, the location of devices on the die can interact and cause latch-up or failure. With RF engineering, the radiation from one component on your board will affect other components. It’s a large interactive system. Yet software is many orders of magnitude more complex and more interactive.

The Atlantic article maintains that “The FAA (Federal Aviation Administration) is fanatical about software safety.” No, the FAA is fanatical about software documentation. So was the Army, which required contractors to comment every line of code. So we would have silly comments like “Move register C to the accumulator.” Factually correct, but absolutely useless for understanding the design intent. A friend who works on products under FAA certification tells me it’s the worst code he has ever seen. One problem is that it never gets patched or upgraded, since that would trigger another expensive, time-consuming FAA review.

The Software Experience

Unlike Pease, I have done some software. I once made a PC test fixture to exercise a wafer elevator in a semiconductor manufacturing machine. The elevator had a stepper motor and an optical “home” flag. I used Basic. The program was 200 lines long. I naively just tried to run it, and of course, nothing happened. I ended up going two or three lines at a time, adding semaphores and flags to figure out what was happening.

Like Gilmore noted, it took me much longer than I anticipated. It did finally work. I was delighted when I compiled it in QuickBasic and the program ran much faster. That’s the beguiling thing about software writing and use. It frustrates you for hours, but the payoff is so satisfying, you keep doing it. It’s like 3-cushion billiards versus bumper pool.

I was working at a startup doing embedded design of point-of-sale systems. Four of us had been up all night trying to figure out a particularly insidious bug. We cracked it about 5:00 AM. That’s when my pal Wayne Yamaguchi gave an extemporaneous speech about building quality software. He noted that the lower-level modules like serial communication and timer loops had to be rock-solid, just like the foundation of a skyscraper. The code that called these modules could be a little less tested, but it still should be rigorously proven out. When I think of some modern software, it seems more like a user-interface mountaintop held up by some bamboo sticks, ready to collapse at any moment.

Team Building

To build a successful skyscraper you need architects, engineers, and interior decorators. The architect has to mediate between the fanciful interior decorators and the down-to-earth engineers. If the decorators want some giant unsupported glass wall, the architect has to check with the engineers to make sure this is possible. If the architect wants some cantilevered open-span structure, she knows to check with the engineer to make sure it won’t collapse.

When I have worked with website software teams, I noticed they were mostly interior decorators, with a few programmers. There were no website architects. The marketing or design people wanted snazzy rounded corners and pop-up light-boxes and all kinds of trendy features. The engineers just put their nose to the grindstone and tried to deliver. Then the marketing people were surprised to see the website slow, buggy, and not compatible with all browsers. Nothing was architected. It was non-technical people demanding things work a certain way, and powerless engineers in the trenches trying to make things work that way.

I saw one engineer spend a day trying to make nice radiused corners on a box. When his boss gave him guff for taking so much time, he explained that getting rounded corners on all browsers without using non-approved JavaScript was not easy. When they told the marketing types, the reply was, “Why didn’t you tell us? We could have lived with square corners.” No architect.

That same programmer used to work at eBay many years ago. He noted a vice president had come from Oracle and insisted that the customer-facing servers be 170 Windows computers. Being Windows, it was necessary to reboot the machines every few days so that the inevitable Windows memory leaks didn’t crash the computer.

Problem was, as long as any given customer had an eBay session open, you could not just reboot and cut him off. So the system administration was designed to stop any new page views on that one machine, while waiting for all existing sessions to end. Then the computer could be rebooted and start to accept new page requests. It might take days.

It’s unusual a former Oracle guy would be so adamant about how a front-end system should work. Oracle is more of a database company, concerned with the “back end” part of website functionality. My pal said the VP finally left, and eBay installed middleware that could assign any disparate page request to whatever machine was available. Then it was trivial to do a reboot without destroying an ongoing user interaction.

To me, that says one of the biggest websites on earth didn’t have an architect. We all know how this happens. Something is slapped together to get working fast. Things are added onto that. Nobody ever just tosses all of the code and starts over. I wish they would; I still find eBay infuriating for many reasons.

Another example of how hard it is to do software involves Linux versus a microkernel operating system. Before Linus Torvalds came up with Linux in a nine-month session of binge-work, the hot topic was making a UNIX-like operating system using a microkernel. There would not be one large body of code in a kernel. Instead, there would be a slew of small microkernels to do various OS tasks independently. Richard Stallman was championing this approach and working hard to come up with a working system. Torvalds abandoned the microkernel, and just did a conventional one. He made it to market, whereas Stallman’s effort stalled. Microkernels are a great intellectual exercise for college professors, but they’re just too hard to debug and get working. Stallman has admitted as much.

Some programmers like to emulate a computer, mentally doing one instruction after another. I had a boss that called this “linear brained.” They are what one of my software pals calls procedural programmers, who think like a flow chart. They don’t so much think as parse a linear stream of deterministic instructions.

My pal told me it’s much harder to do event-driven programming. Here the user might press a button on his Palm Pilot and the program has to drop everything, service that button, decide what to do based on where it was, and store or pop the previous event on the stack so it can do something new. A hardware guy like me would call it interrupt-driven programming. To realize how hard it is to do microkernels or event-driven programming, think about how you would implement an “undo” function in such a system.
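A bare-bones sketch of the difference, in Python rather than anything resembling real Palm Pilot code: instead of marching through a fixed sequence of steps, the program sits in a loop, pulls whatever event arrived next off a queue, and dispatches it to a handler. The event names and handlers here are invented for illustration.

from collections import deque

events = deque()   # in a real system, interrupt or driver code would append events here

def on_button(name):
    print(f"button {name}: drop everything, save state, switch to the new task")

def on_timer(tick):
    print(f"timer tick {tick}: background housekeeping")

handlers = {"button": on_button, "timer": on_timer}

# Fake a few incoming events for the demo
events.extend([("timer", 1), ("button", "calendar"), ("timer", 2)])

# The event loop: whatever happened next, look up its handler and run it
while events:
    kind, payload = events.popleft()
    handlers[kind](payload)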

Modes of Modeling

The Atlantic article talks about model-based design like it’s a new thing. Companies like ANSYS were talking about model-based design years ago. Simulink works in Matlab to characterize multi-domain dynamic systems using model-based design.

If model-based design gets you a working system sooner, that’s great. Mentor Graphics sells software to help understand the interaction of multiple CAN (controller area network) buses. Modern autos often have multiple CAN buses as well as other buses such as LIN, MOST, and VAN. With 100 million lines of code in a modern car, the engineers need all of the tools they can get to help ensure the software runs as intended.

Companies like National Instruments gave us LabVIEW three decades ago. This allows for better architecture, since it’s so visual. You can finally see the software and how it works. There are pretty little boxes you hook up that other programmers can understand at a glance. Then again, there’s also a place for a few lines of old-fashioned code, which LabVIEW will let you put inline. It’s the best of both worlds.

I see Arduino as a hopeful future for software. It took me less than an hour to get an Arduino board communicating over the USB port, while sensing switches and lighting LEDs. I had done assembly language programming, but not C or C++. Linear Technology, now part of Analog Devices, has demo systems it calls Linduino, which build on the Arduino integrated development environment to take large amounts of data from an LT chip, such as an analog-to-digital converter.

The problem with any design, model-based or otherwise, is that you have to understand the model. It was the night before the show. I was working with my boss on a prototype wafer-scrubbing machine that had to be at the show the next day. He had designed the machine to run on an RS-485 bus. There was one cable that would send commands to various parts of the machine. It was a very neat idea.

It was working OK until the night before the show. Then it started breaking wafers. It took most of the night to realize that when we sent the “find home” instruction to the wafer handler, it would run the stepper motor until it found home. But the reply that it was at home got clobbered by other traffic on the bus. So the system would send another “find home” instruction. That one tended to get clobbered, too; there was all kinds of asynchronous traffic on the bus. After the third or fourth “find home” instruction, one of the “at home” replies got through to the operating system. Thing is, it was not at home—it was many steps past home. Hence the broken wafers. That was a hard lesson in how you have to acknowledge instructions and replies on a bus, or things go haywire. No amount of model-based design would have helped us. We hardware guys just didn’t understand good bus protocol.

The consultant who architected OrCAD told me that he doesn’t start writing code until he has three different ways to solve the problem. Then he evaluates all three for those unintended consequences we all suffer with. A wag has noted, “The best programming language is the language your best programmer likes best.” The thing is, software is a language, and it’s one we hardware folks don’t always get.

I think of the movie The Matrix, where the guy could look at the numbers and symbols dripping down the screen and see what was happening. Sometimes software can be the most abstract mathematics made real and beautiful. Other times, it can be a programmer punishing the world for teasing him in junior high school. We are going to get both, but it’s sure to be more of the beautiful as this new thing called software development gets improved and perfected.

Combine Modified CD Optical Assembly, PSoC for High-Resolution Microgram Measurements


Use a standard CD drive’s optical pickup head as the basis for a high-resolution mass-measuring scale, which can read down in the microgram range, in conjunction with a PSoC system.


The ability to measure microgram and milligram masses with extremely high accuracy is often critical in pharmaceutical and chemical industries. This design combines a pickup head from a compact-disc (CD) system and a millimeter-sized cantilever mounted on the CD’s optical pickup head (OPH) to build an alternative, cost-effective microbalance. This simple setup interfaces to a personal computer through a PSoC microcontroller for read out (Fig. 1).

1. The microbalance setup uses a standard CD’s optical pick-up head with an attached millimeter-size cantilever supported with a PSoC embedded design.

Since the CD OPH is designed to measure the variation in the reflecting surface of a CD, this design offers resolution of better than 2 μm for the displacement of the added cantilever, which is 25 mm long, 6 mm wide, and 0.43 mm thick (Fig. 2). It offers microgram-scale accuracy at lower cost and with greater sensitivity than microbalances based on the piezoelectric effect.

2. Shown is the CD’s optical pick-up head mounted with millimeter-size cantilever (this is the key element of this mass sensor arrangement) connected with an embedded PSoC design functioning as a microbalance.
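To connect an added mass to the displacement the pickup head actually sees, standard beam theory for a rectangular cantilever loaded at its free end is a reasonable first-order model. The article doesn’t state the cantilever material, so Young’s modulus E is left symbolic here:

\[ \delta = \frac{F L^{3}}{3 E I} = \frac{4\, m g\, L^{3}}{E\, w\, t^{3}}, \qquad I = \frac{w t^{3}}{12}, \quad F = m g \]

With L = 25 mm, w = 6 mm, and t = 0.43 mm from the dimensions above, the cubic dependence on length and thickness shows why the beam geometry, as much as the optics, sets the sensitivity of the balance.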

In normal CD-player operation, the unit’s OPH laser diode (780 nm) uses a linearly polarizing beam splitter, collimator, objective lens, and anamorphic lens placed in front of the detector array, plus a focusing voice coil and its drive motor. The laser beam is directed to the specimen via the objective lens, which is actuated by the voice-coil motors. The CD surface, in turn, scatters the beam back through the objective lens to the beam splitter, where it’s directed via the anamorphic lens onto the four-quadrant photodiode (QPD).

From the characteristics of the anamorphic lens, the beam shape at the detector array appears as either elliptic or circular. The circular shape occurs only if the reflective surface of the specimen is placed exactly at the focal plane of the optical path.

3. LabVIEW front panel screen for a typical mass measurement.

Here, light reaches the QPD via a small piece of reflective foil in place of the CD, and the outputs of the QPD indicate the deflection of the cantilever beam. The differential output signal [(A+C) - (B+D)] represents the error signal from the QPD detector; it is the deviation from the concentric circular laser-beam spot due to cantilever displacement. This output is converted into digital format and displayed using Laboratory Virtual Instrument Engineering Workbench (LabVIEW) (Fig. 3); the read-out was programmed on a single-chip PSoC microcontroller that supports the required analog and digital blocks (Fig. 4 and Code Listing).
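The arithmetic behind the read-out is simple enough to sketch on the host side. This is not the PSoC Creator listing referred to above, just an illustrative Python stand-in; the quadrant readings and the counts-to-mass slope are invented numbers, while the 13,890 no-load count is the figure quoted later in the article.

def qpd_error(a, b, c, d):
    # Differential error signal from the four-quadrant photodiode
    return (a + c) - (b + d)

def counts_to_mass_ug(counts, zero_counts=13890, ug_per_count=1.0):
    # Linear calibration: zero_counts is the no-load ADC reading;
    # ug_per_count is a made-up slope that would come from calibration weights.
    return (counts - zero_counts) * ug_per_count

# Invented quadrant readings, just to exercise the functions
err = qpd_error(a=0.52, b=0.48, c=0.51, d=0.47)
print(f"QPD error signal: {err:+.3f} (arbitrary units)")
print(f"14,150 counts -> about {counts_to_mass_ug(14150):.0f} ug with the assumed slope")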

4. The flow chart shows how the readings from the QPD are converted into mass-related ADC counts, which are then used by the PSoC display.

PSoC Creator code listing for the CD’s OPH readout.

Test and Evaluation

Tiny pieces of copper from stranded wire were used for evaluation. With no mass added to the cantilever attached above the OPH, the ADC count is 13,890. Test mass was calculated using measured pieces of a wire strand with a 0.30025-mm radius, so the volume of a cut piece is (πr²) × l (where r is the strand’s radius and l is its length).
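For example (a sketch with an assumed 1-mm cut length; the radius and copper density are the article’s values), the estimated mass of one cut piece follows directly from this volume:

```c
#include <stdio.h>

/* Estimate the mass of a cut copper strand: m = (pi * r^2 * l) * density.
 * Radius and density are from the article; the 1-mm length is an example. */
int main(void)
{
    const double pi       = 3.14159265358979;
    const double radius_m = 0.30025e-3;   /* strand radius, 0.30025 mm */
    const double length_m = 1.0e-3;       /* example cut length, 1 mm  */
    const double rho_cu   = 8960.0;       /* copper density, kg/m^3    */

    double volume_m3 = pi * radius_m * radius_m * length_m;
    double mass_ug   = volume_m3 * rho_cu * 1e9;   /* kg -> micrograms */

    printf("volume = %.3e m^3, estimated mass = %.0f ug (%.2f mg)\n",
           volume_m3, mass_ug, mass_ug / 1000.0);
    return 0;
}
```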

Estimated copper-strand mass and the corresponding ADC counts as obtained via deflection of the attached cantilever.

The table shows the estimated mass for various lengths of copper (copper density is 8960 kg/m³) and the corresponding ADC measurement counts in the PSoC design, as obtained through cantilever deflection. Figure 5 plots this data and indicates the linearity achieved.

5. The plot shows the PSoC ADC counts from a PSoC embedded design versus the estimated mass for various lengths of copper-wire strand (radius is 0.30025 mm).

Tiny rectangular pieces of notebook paper were also used, with masses in the range of 75 to 700 μg. The mass of each piece was estimated from the basic formula of mass = paper-piece volume × ρ, where ρ is the density of the paper material (here, 250 kg/m³). The ADC count of the PSoC read-out interface versus mass is plotted in Figure 6.

6. Mass measurement of tiny paper rectangles of measured volume (length × breadth × thickness) with resultant estimated mass plotted versus the PSoC ADC counts.

These measurements confirm the successful operation of this microgram mass balance in an ambient environment, with a wide dynamic range, by exploiting the cost-effective optical pickup head of a CD drive. For improved performance, the entire assembly can be placed inside a transparent desiccator chamber. In principle, this mass-measurement approach can be extended to measure nanogram-level displacements by using a Blu-ray pickup head, with its 405-nm laser, in place of the 780-nm laser of the standard CD unit.

Nandhini Jayapandian is pursuing her Master’s in engineering at Drexel University, Philadelphia, Pa. Janani Jayapandian is a software analyst and an engineering graduate in Electronics & Communication Engineering from Sathyabama University, Chennai, India. Their interests include embedded designs using PSoC, VLSI designs for automation of experiments and Industrial equipment, and MEMS sensors and Virtual Instrument Programming using LabVIEW.

References

Yuan-Chin Lee and Shiuh Chao, “A Compact and Low-Cost Optical Microscope,” IEEE Transactions on Magnetics, Volume 50, Issue 7, July 2014.

G. Gerstorfer and B.G. Zagar, “Development of a Low-Cost Measurement System for Cutting Edge Profile Detection,” Chinese Optics Letters, 2011.

“Measuring the Deflection of the Cantilever in Atomic Force Microscope with an Optical Pick up System,” 45th IEEE Conference on Decision & Control, December 13-15, 2006.

Photon/Phonon Conversion Sheds New Light on Optical Signal Processing

$
0
0

To facilitate processing of signals in the high-speed optical domain, there must be a way to store and retrieve them. Converting them to acoustic phonons may offer a solution to this vexing challenge.

The processing of optical signals, with their real-time, streaming nature, would be greatly enhanced if there were a mechanism for writing, storing, and reading back the data. It’s the same situation with our familiar electronic data, of course, which is why “memory” is a major system function. Phonons—quanta of coherent acoustic vibrations—may offer a way to store and retrieve optical signals, and at a speed commensurate with the optical domain.

To pursue this objective, researchers at the University of Sydney in Australia have demonstrated a buffer built as an optical waveguide that works by coherently translating the optical data to an acoustic hypersound wave. Optical information is extracted using a complementary process. The result is the storage of phase and amplitude of optical information with gigahertz bandwidth as well as multichannel operation at independent wavelengths, and with negligible crosstalk.

Their paper in Nature, “A chip-integrated coherent photonic-phononic memory,” details the theory, actual implementation, and results. The objective is to develop components for storing or delaying optical signals, enabling mostly optical processing and thus the speed and performance benefits that such a development would bring. Signal “delay” is needed so that data can be briefly stored and managed inside the integrated device for processing prior to retrieval and transmission.

By using coherent phonons, the velocity of the optical pulse stream was reduced, leading to the ability to write and retrieve the optical data pulses as acoustic phonons. (Note that these hypersound phonons have wavelengths comparable to optical photons, but with velocity that’s five orders of magnitude lower.) Furthermore, optical data-transmission systems generally use multiple wavelengths to increase the overall capacity. Therefore, a storage process would be much more useful if it worked over a wide frequency range (supporting a large number of channels), and it must limit crosstalk between the optical channels.

In the photonic-phononic memory, the storage process (a) uses an optical data pulse that’s depleted by a strong counter-propagating write pulse, storing the data pulse as an acoustic phonon. In the retrieval process (b), a read pulse depletes the acoustic wave, converting the data pulse back into the optical domain; all done using an experimental setup with a simplified basic schematic (c). The inset in the lower-left corner shows a chalcogenide chip containing more than 100 spiral waveguides of 6-, 11.7- and 23.7-cm lengths.

Their innovative approach (see figure) differs from existing schemes for coherent optical storage, as it uses acoustic phonons traveling in a planar, integrated optical waveguide. Since their buffer does not rely on structural resonance, it’s not limited to a narrow bandwidth or single wavelength operation.

The information carried by the optical signal is transferred to the acoustic phonons using stimulated Brillouin scattering (SBS)—a highly nonlinear optical effect—that results in coherent coupling between two optical waves and an acoustic wave. The intensity of the opto-acoustic interaction is increased by several orders of magnitude by using carefully designed waveguides that guide optical as well as acoustic waves, allowing storage of broadband optical signals in a planar waveguide without relying on a resonator geometry. (Note that this is a greatly simplified representation, as the actual setup includes a continuous-wave, single-sideband (SSB) modulator; intensity modulator; pulse generator; bandpass filter; photodetector; local oscillator; and Brillouin frequency shifter; additional details are in the paper.)

A strong optical-write pulse, offset by the acoustic-resonance frequency of the optical-waveguide material, propagates counter to the optical data pulse. When the two pulses meet, the resultant beat pattern between them compresses the material periodically (a process called electrostriction), and excites resonantly and locally a coherent acoustic phonon.

For the storage medium, they used a spiral waveguide made from chalcogenide glass (a material containing one or more of the chalcogen elements, such as sulfur, selenium, and tellurium, and among the most suitable for SBS) with a “rib” waveguide structure having a cross-section of 2.2 μm by 800 nm, with more than 100 spirals that have lengths ranging from around 9 to 24 cm. The overall chip length is about 30 mm.

The team used six different amplitude levels in the first pulse, and kept the amplitude of a second data pulse constant as a reference. A comparison of the original and retrieved pulses demonstrated that they were able to easily distinguish among the six amplitude levels. They also confirmed the multi-wavelength capability of the memory, with operation at several different wavelengths and minimal crosstalk between those wavelength channels.

Using Resistors for Current Sensing: It’s More Than Just I = V/R

$
0
0

Sensing current by measuring voltage across a resistor is simple and elegant, but issues arise with the electrical interface, sizing and selection, and thermal/mechanical considerations.

Measuring dynamic current flow has always been important for managing system performance, and it has become even more so with the proliferation of smarter management functions for devices and systems. The most common way to accurately make this measurement is by using a sense resistor of known value inserted in series with the load, then measuring the IR voltage drop across this resistor. By applying Ohm’s law, determination of the current flow is simple—or at least that’s how it seems.

While using a resistor is an effective and direct basis for such sensing, it also brings many design issues and subtleties despite its apparent simplicity. These span the electrical interface, resistor sizing and selection, and many mechanical considerations.

The Electrical Interface

Do you go with high-side or low-side sensing? In low-side sensing, the resistor is placed between the load and “ground” (or, in many cases, circuit “common”) (Fig. 1), which enables the voltage-sensing circuit to also be connected directly to ground. While components in this topology aren’t subject to any high-voltage issues, it’s often undesirable and even unacceptable, for two reasons.

1. Low-side sensing places the resistor between the load and common; it simplifies the interface to the voltage-reading analog front end but brings problems with load integrity and control.

First, doing so means the load itself isn’t grounded, which is impractical for mechanical reasons in many installations. For example, having an ungrounded starter motor in a car and insulating it from the chassis is a design and mounting challenge. It also mandates the need for a return wire that can carry the load current back to the source, rather than using the chassis. Second, even if wiring and mounting aren’t considerations, placing any resistance between the load and ground (common) negatively affects the control-loop dynamics and control.

2. High-side sensing is the more commonly used approach, despite the fact that it brings new issues of common-mode voltage.

The solution is to use high-side sensing, with the resistor instead placed between the power rail and the load (Fig. 2). This eliminates the problems created by ungrounding the load, but a new issue arises. The circuitry that senses the voltage across that resistor now can’t be grounded, which means a differential or instrumentation amplifier is used. This amplifier must have a common-mode voltage (CMV) rating that’s higher than the rail voltage. In general, if the rail voltage is higher than the CMV rating of standard ICs (typically, about 100 V), then more complicated approaches to providing this interface are needed.

An alternate approach is to use interface circuitry that includes galvanic isolation between the sensing amplifier input and output (Fig. 3). This means no ohmic path exists between the two analog sections—it appears like a non-isolated amplifier, except for the internal isolation.

3. Galvanic isolation can be achieved in several ways. Regardless of approach used, the result is signal information is passed across a barrier without any ohmic path between input and output.

The approach brings other beneficial features as well: It greatly improves system performance by eliminating ground loops and associated issues; simplifies subsequent circuitry; eases or eliminates safety-related layout and wiring requirements on clearance and creepage; adds an electrical safety barrier between the high voltage and the rest of the system; and is mandated by safety and regulatory standards in many applications.

Isolation can be implemented using an all-analog isolation amplifier. Alternatively, a subcircuit made up of a non-isolated amplifier followed by an analog-to-digital converter and isolator (which may use optical, capacitive, or magnetic principles), all operating from an isolated power supply that’s independent of the main supply, can be used (Fig. 4). Regardless of the isolation solution selected, the voltage-sensing circuit for higher rail voltages can become complicated with respect to BOM and layout, but there often is no other practical option.

4. Isolation is sometimes implemented in the digital domain using an isolated front end (amplifier, ADC, isolator) and an isolated power supply. This AD7401A places all of the needed functions in a single package. (Courtesy of Analog Devices Inc.)

Resistor Sizing and Selection

Ideally, the sensing resistor value should be relatively large so that the resultant voltage drop will also be large, thus minimizing effects of circuit and system noise on the sensed voltage, as well as maximizing its dynamic range. However, a larger value at a given current also means there is less voltage—and thus less available power—for the load due to IR drop, as well as I²R resistor self-heating, wasted power, and added thermal load. It’s clearly a tradeoff and compromise situation.

In practice, it’s generally desirable to keep the maximum voltage across the sensing resistor to 100 mV or below, so that the corresponding resistor values are in the tens-of-milliohms range and even lower. Sense resistors are widely available in these small values; even 1-mΩ and lower resistors are standard catalog offerings (Fig. 5). At these low values, even the resistance of the ohmic contacts of the sensing circuitry is a factor in the calculations.
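As a quick sanity check (a sketch; the 100-mV full-scale target and 50-A maximum current are example numbers, not recommendations), the resistor value and its worst-case dissipation follow directly from Ohm’s law:

```c
#include <stdio.h>

/* Pick a sense-resistor value from a full-scale sense voltage and maximum
 * current, then check its dissipation. The 100-mV / 50-A inputs are examples. */
int main(void)
{
    const double v_sense_max = 0.100;  /* target full-scale drop, V */
    const double i_max       = 50.0;   /* maximum load current, A   */

    double r_sense = v_sense_max / i_max;        /* Ohm's law: R = V / I  */
    double p_diss  = i_max * i_max * r_sense;    /* worst-case I^2 * R, W */

    printf("R_sense = %.1f mOhm, worst-case dissipation = %.1f W\n",
           r_sense * 1e3, p_diss);
    return 0;
}
```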

5. This 0.2-mΩ current-sense resistor handles up to 200 A and can dissipate 15 W. It measures 15 × 7.75 × 1.4 mm, and the special alloy construction features a TCR of ±100 ppm/°C. (Courtesy of TT Electronics)

The dilemma of resistor selection doesn’t end with determining a value that balances the tradeoffs between voltage and power loss versus readout range. First, the resistor dissipation creates self-heating, which means the selected resistor type must have a suitable power rating, and it must be derated at higher temperatures.

Also, any self-heating will cause the resistor to drift from its nominal value. How much it drifts depends on the material and construction of the sense resistor. A standard chip resistor has a temperature coefficient of resistance (TCR) of about ±500 ppm/°C (equal to 0.05%/°C), while standard sense resistors fabricated with special material and construction techniques are available with TCRs of ±100 ppm/°C, down to about ±20 ppm/°C. There are even precision-performance units offered (at a much-higher cost) down to ±1 ppm/°C.
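To put those TCR numbers in perspective, the fractional drift over a temperature rise ΔT is approximately TCR × ΔT. The short sketch below assumes a 50°C self-heating rise and compares a ±500-ppm/°C chip resistor with a ±20-ppm/°C sense resistor:

```c
#include <stdio.h>

/* Approximate resistance drift over a temperature rise: dR/R = TCR * dT.
 * The 50-degC rise is an assumed example. */
static double drift_percent(double tcr_ppm_per_c, double delta_t_c)
{
    return tcr_ppm_per_c * 1e-6 * delta_t_c * 100.0;  /* as a percentage */
}

int main(void)
{
    const double dt = 50.0;  /* example self-heating rise, degC */

    printf("standard chip resistor (500 ppm/C): %.2f %% drift\n",
           drift_percent(500.0, dt));
    printf("low-TCR sense resistor (20 ppm/C):  %.2f %% drift\n",
           drift_percent(20.0, dt));
    return 0;
}
```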

Note that using a snip of copper wire or PC-board track might seem like a good way to get a milliohm-valued sense resistor at nearly zero cost. However, the TCR of copper is around 4000 ppm/°C (0.4%/°C), which is orders of magnitude inferior to a low-TCR sense resistor.

In some cases, a viable tactic for reducing the temperature rise due to self-heating is to use a larger-wattage resistor, which will be less affected by self-heating. But these, too, have a somewhat higher component cost and larger footprint. The designer must do a careful analysis of the current, the dissipation, the effects of TCR, and any derating needed for long-term reliability and performance.

Mechanical Considerations

At very low current levels, the physical size of the current-sense resistor is about the same as other resistors. But physically larger resistors are needed as the wattage rating increases, and this will have an impact on both the PC-board layout—assuming the resistor is board-mounted—and the thermal situation of both the resistor and its surroundings.

For the higher-rated resistors, placement and mounting becomes a serious issue; PC-board surface mounting may not be an option; and real estate and thermal issues increase significantly. Larger units may even need mounting brackets or hold-downs to keep motion and vibration effects down to an acceptable minimum.

The difficulty of making the “simple” electrical connections shouldn’t be overlooked either. When wires are carrying tens or even hundreds of amps, the connections between those wires and the resistor’s terminations need careful planning and larger, more rugged surfaces, which, of course, may include screws and clamps. Just think of the typical internal-combustion car that must draw over 100 A from a modest 12-V battery to start the engine. Even 10 mΩ of contact resistance at the battery’s positive terminal translates to a supply loss of more than 1 V, in a scenario where there isn’t much voltage headroom.
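The arithmetic behind that example is simply V = I × R at the contact (the 100-A and 10-mΩ figures below are the illustrative values used above):

```c
#include <stdio.h>

/* Voltage lost across a high-current contact: V = I * R (example values). */
int main(void)
{
    const double i_start   = 100.0;   /* cranking current, A         */
    const double r_contact = 0.010;   /* contact resistance, 10 mOhm */

    printf("contact drop = %.1f V out of a 12-V supply\n", i_start * r_contact);
    return 0;
}
```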

In addition, even though the sensed voltage is low, the common-mode voltage may not be low, and the connections may be carrying high currents. As a result, there are safety and access issues that will affect cabling, routing, possible short circuits, and accessibility. Further, the designer must plan where and how to connect the relatively thin-gauge voltage-sensing wires to the contacts that also carry the higher load current. Resistance of the sensing-wire contacts may look like resistance that’s part of the sense resistor itself, so the I = V/R calculations need to factor in this additional resistance. The TCR of the contacts can also be an issue in higher-accuracy situations.

Using a resistor for current sensing is a very educational example of the challenges of going from a solution that’s very simple in principle, to one which works, works well, and works over the range of expected operating conditions of the application. Fortunately, it’s also a solution that’s used extensively, so many of the issues can be resolved by leveraging the experience of application engineers from resistor vendors or experts in high-current sensing.

Capacitive Sensing: A Paradigm Shift for Encoders

$
0
0

Sponsored by Digi-Key and CUI Inc.: Optical and magnetic encoders dominate the market, but the capacitive electric encoder is pushing its way into the mix with enhanced resolution, ruggedness, and programmability.


Encoders are a vital component in many applications that require motion control and feedback information. Whether a system’s requirement is speed, direction, or distance, an encoder produces control information about the system’s motion.

As electronics strives toward higher resolution, greater ruggedness, and lower cost, encoder sensing mechanisms are improving in these areas as well. Traditionally, the encoder’s sensing mechanism has been optical or magnetic. But there’s a new player in town: the capacitive electric encoder. We’re going to talk about this new player and how it measures up to the encoding environment’s accuracy and ruggedness requirements.


1. Different encoder types developed by CUI. (Courtesy of Digi-Key)

Encoder Classes

An encoder is a device that converts information from one format to another. In this article, the encoder is a motion detector with an electrical drive element. There are two different classes of electrical encoders: linear and rotary.

A linear class measures motion along a straight path, providing position, speed, and direction. This encoder has a sensor, a transducer, and a location reader. An analog or digital output signal relays the encoder’s position to the system’s receiver, which reads the information to identify the position. The encoder can provide speed or velocity data over time, and with two sensors, the direction of motion can be determined.

A rotary class, also called a shaft encoder, is a must-have in a hobby shop for driving motors. This device activates motors and measures rotational motion or angular position. The rotary encoder also has a sensor, transducer, and location reader. The reader relates the current position of the shaft to provide angular data to the user. The encoder also can provide speed or velocity data over time, and with two sensors, it’s possible to determine the encoder’s direction.

Two Different Encoder Categories

The encoder’s output governs its category: incremental or absolute. An incremental encoder provides speed and distance feedback for any linear or rotary system. This category of encoder generates a pulse train, or square wave, to determine position and speed. Typically, the output signal is either zero or the supply voltage.

An absolute encoder generates unique bit configurations to track positions directly. The number of bits describes the resolution of these encoders. For a magnetic encoder, the number of magnetic poles determines that resolution. For higher-resolution optical encoders, there are more open segments for the light to shine through.
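As a rough illustration (the bit counts are arbitrary examples), an n-bit absolute encoder resolves one part in 2^n of a revolution, so its angular resolution is 360°/2^n:

```c
#include <stdio.h>

/* Angular resolution of an n-bit absolute rotary encoder: 360 / 2^n degrees. */
int main(void)
{
    for (int bits = 8; bits <= 14; bits += 2) {
        long positions = 1L << bits;
        printf("%2d bits -> %6ld positions, %.4f deg/step\n",
               bits, positions, 360.0 / (double)positions);
    }
    return 0;
}
```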

The physical implementation uses the three fundamental sensing fields to capture the element of motion. These three fields are optical, magnetic, and electric. The optical encoder incorporates a light-emitting diode (LED) and photodetector pair. The magnetic encoder uses magnets to create electromagnetic fields. The capacitive electric encoder employs capacitors and electrical signals. All three of these sensing fields provide the absolute and incremental information for the linear and rotary encoder classes.   

Optical Sensing Encoder

The most popular sensing approach is the optical encoder, which can provide high-resolution results. This device has an LED light source, a notched rotating disk, and a photodetector. In a rotary encoder, the shaft rotates the disk, allowing emitted light to pass through the disk’s notches and impinge on the photodetector. As the rotor spins, the result is a pulsed signal at the photodetector.

2. An optical encoder operates by sensing light through the disk’s windows as the shaft rotates. (Courtesy of Digi-Key)

Optical encoders can deliver high-resolution results by increasing the number of notches in the optical disk. The main components of the optical encoder are the optical disk, LED light source, photodetector, transimpedance amplifier, and analog-to-digital converter (ADC). Of these elements, the LED light source requires a high constant current during operation and adds to the physical profile.

It’s useful to note that an optical encoder’s construction is fragile and highly sensitive to dust, dirt, and other environmental contaminants. Vibrations and temperature excursions can also be damaging or alter performance.

The optical encoder’s immunity to electromagnetic interference (EMI) depends heavily on the internal circuit layout and shielding. The current consumption is typically 20 mA or higher, which is especially critical in battery-powered applications. Finally, the LEDs have a limited half-life of approximately 10k to 20k hours, or roughly one to two years of continuous operation. Compared to magnetic encoders, optical encoders are capable of higher resolution and thus higher accuracy.

Magnetic Sensing Encoder

The core of a magnetic rotary encoder is a large magnetized wheel. This device generates output data by detecting changes in magnetic flux fields. With the rotary encoder, the motor turns the shaft and rotating disk. The disk contains alternating, evenly spaced north and south magnetic poles located on its outer, circular edge. This disk arrangement provides a rugged and robust solution.

The wheel spins over a plate of Hall-effect or magnetoresistive sensors. These sensors detect the north-to-south and south-to-north transitions as the poles pass. As the disk spins over the sensors, there’s a predictable response. A signal-conditioning circuit receives the magnetic responses and translates the signal into speed or position. The number of magnetized wheel pole-pairs and sensors, along with the type of electrical circuit, work together to determine the resolution of the encoder.

The magnetic encoder provides a very rugged solution, bringing high immunity to electrical and magnetic interference. The encoder also has a mechanically durable housing with no sensitive internal components, such as an LED. Compared to optical encoders, magnetic encoders avoid seal failures, vibration or impact damage, bearing failures, and LED degradation. Oil, dirt, and water don’t affect the encoder’s magnetic fields; however, magnetic encoders aren’t as accurate as their optical counterparts.

Capacitive Electric Encoder

The capacitive electric encoder has three main parts: rotor, stationary transmitter, and stationary receiver. The capacitor rotor replaces a typical optical disk or magnetic disk with a printed circuit board (PCB) that has an etched sinusoidal pattern.

The rotor rotates, which causes the etched pattern to modulate in a predictable way. The measured capacitance-reactance in this system is between the two stationary boards as the rotor rotates. In the three-plate topology, the rotor is between the stationary transmitter and receiver boards.

3. Shown is the basic operating principle of a capacitive encoder. (Courtesy of Digi-Key)

A measurement is taken of the changing capacitance-reactance between the stationary boards as the rotor rotates. The capacitive encoder detects the change in capacitance-reactance by sending a high-frequency reference signal from the transmitter board and observing the field changes with the receiver board.

The raw signals of the capacitances in this system create a time-varying voltage. Two synchronous demodulators followed by low-pass filters process this amplitude-modulated carrier to provide dc sine and cosine outputs. The cutoff frequency is highly flexible and limited by the internally generated noise. This noise is proportional to the square root of the cutoff frequency and sets a limit on the resolution. The standard capacitive encoder’s cutoff frequency is 1 kHz.

The total field is integrated and converted into a current, and the on-board electronics process that signal. The overall process provides several dc output signals that are proportional to the sine and cosine of the rotation angle. The receiver essentially captures the modulated signals and translates them into increments of rotary motion.
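A hedged sketch of that last step: given demodulated sine and cosine outputs, the rotation angle can be recovered with a four-quadrant arctangent. The signal values and scaling below are illustrative, not the encoder’s actual interface:

```c
#include <stdio.h>
#include <math.h>

/* Recover the rotation angle from demodulated sine/cosine outputs using
 * a four-quadrant arctangent. Signal values are illustrative. */
static double angle_degrees(double sine, double cosine)
{
    double rad = atan2(sine, cosine);          /* -pi .. +pi          */
    double deg = rad * 180.0 / 3.14159265358979;
    return (deg < 0.0) ? deg + 360.0 : deg;    /* normalize to 0..360 */
}

int main(void)
{
    printf("angle = %.2f deg\n", angle_degrees(0.5, 0.866));  /* ~30 deg */
    printf("angle = %.2f deg\n", angle_degrees(-1.0, 0.0));   /* 270 deg */
    return 0;
}
```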

The capacitive sensing encoder provides a combination of performance features that optical and magnetic technologies cannot match. This encoder provides fine and coarse channels that generate position and speed information, which is available through digital programming techniques. The combination of these two functions, along with high-resolution ADCs and repeatable capacitances, creates a highly accurate environment. This is all accomplished with a low-power system that’s immune to magnetic and electric interference. The capacitive encoder’s power consumption is lower than that of both the optical and magnetic solutions.

The shielding of the rotary and stationary plates prevents the entry of external signals and contaminants into the mechanically durable encoder housing. This arrangement produces a low-profile, extremely rugged encoder solution. The shaft of the capacitive rotor is flat and hollow. Consequently, there are no bearings to cause wear and tear failures.

Capacitive electric encoders offer performance, reliability, and accuracy not available with traditional encoders. Key differences between optical and capacitive encoders are the optical disk and power dissipation. The capacitive electric encoder eliminates the optical disk and generally consumes 10 mA. It’s more robust, less susceptible to contamination, and consequently achieves a much longer life than optical encoders.

Another benefit of capacitive electric encoders is the ability to change the encoder’s resolution by modifying the line count in the electronics, without changing components. Compared to optical and magnetic encoders, capacitive versions simply provide the combination of high resolution, ruggedness, and programmable flexibility.

Conclusion

The encoder is an electromechanical device that converts sensed mechanical motion to a digital signal. The encoder’s output digital signal provides position, velocity, and direction information. Today, encoder applications appear across the board in areas such as automotive, consumer electronics, office equipment, industrial, and medical industries.

The two dominant encoder types in the general industrial market are optical and magnetic. But the relatively new capacitive electric encoder surpasses both of those in terms of high resolution, ruggedness, and programmability.


Related References

The Advantages of Capacitive vs. Optical Encoders PDF

Innovative Rotary Encoders Deliver Durability and Precision without Tradeoffs PDF

Energy-Conscious Sensing for Mobile Motor Drives PDF

Other References

A Designer’s Guide to Encoders

Innovative Rotary Encoders Deliver Durability and Precision without Tradeoffs

Capacitive absolute encoders AMT20 series


Achieving Ultimate Fidelity with Advanced Motion Control—An Engineer’s Perspective

$
0
0

AVDesignHaus’ Dereneville Modulaire MK III is considered the best turntable in the world. Find out how its advanced motion control system works.

The Dereneville Modulaire MK III crafted by AVDesignHaus is considered the best turntable in the world (Fig. 1). Although the investment for this fine system might be extremely high, audiophiles love it and will appreciate the excellent value it offers. The complete deck comes with a polished Corian chassis, high-efficiency shock absorbers for enhanced stability, and a heavy, magnetic, contactless bearing turntable.

1. The Dereneville Modulaire MK III turntable with DTT-01 linear tracking tonearm. (Source: AVDesignHaus)

While all mechanical parts are precision, handmade unique pieces, making the turntable almost look like a Department of Defense scientific instrument, smart microelectronics controlling the system's electromechanical components are responsible for the "last mile" of sound quality. Its major characteristic—stunningly pure, clear sound output—is made possible by the new fully automatic Dereneville DTT-01-S active linear tracking tonearm. It’s actuated by stepper motors and electromechanically controlled by the latest generation of smart stepper-motor driver chips developed by TRINAMIC Motion Control.

This article discusses the challenge faced by the engineers while following their goal to design the perfect tonearm for the turntable system, and how a smart piece of motion-control silicon overcame this challenge by totally silencing the system's electromechanical actuators. 

AVDesignHaus's Engineering Challenge

Every analog audio enthusiast's dream is to experience clear high-fidelity sound with perfect trebles and without distortion when listening to their favorite tracks.

But in an analog deck, many factors distort the original track information, such as the lateral tracking-angle error of the headshell and stylus, mechanical vibrations within the whole system, or mechanical stress at the stylus and the record’s groove. Higher audible frequencies and the record’s inner grooves are especially susceptible to these factors. Since record playback relies on the smallest micro-dynamic vibrations, even the smallest deviations degrade performance.

With an increasing lateral tracking angle error, the total harmonic distortion of the sound output increases. This results in, among other things, diminished precision and reduced ability to locate single voices or instruments.

Technically, the goal is to scan the original recorded vinyl track information with the stylus at a 90-degree angle to the groove and in optimal tangential alignment to the center of the record, exactly as it was cut and pressed during the record’s manufacture. And it must be done without adding noise, without misreading information, and with minimal mechanical wear of the components.

While there are nearly perfect solutions to electronically amplify the audio signal once it’s read by the tonearm's stylus, the stylus' physical and electromechanical task of perfectly scanning the vinyl's groove information is the engineer's challenge.

Conventional tonearms suffer from radial mounting and lack active positioning. They passively follow the record's track, leading to skating and lateral forces. In contrast, active linear tracking tonearms actively position the stylus on the groove while maintaining a tangential orientation with regard to the disc's center.

The Dereneville DTT-01-S tonearm relies on a linear drive based on hybrid stepper motors; the headshell itself is gimbal-mounted. Using precision laser optics and an advanced control algorithm, the 90-degree angle of the stylus is permanently captured and maintained. The control unit regulates the movement of the linear drive and the stepper motors accordingly. Stepper motors are ideally suited for this positioning task since they directly hold the commanded rotor position without needing additional feedback and regulation.

A large portion of the vibration and resonance within the analog deck is damped or removed completely by the heavy chassis, air and magnetic suspension, and high-quality bearings. These primarily focus on overall stability and protection from external forces.

Nevertheless, the electromechanical components of the active linear tracking tonearm—small stepper motors—induce vibration directly at the fixture of the tonearm, which passes on to the headshell and the stylus. This adds noise, makes the headshell jolt, and reduces audio signal quality.

But where do these additional vibrations come from when using stepper motors?

Where Does the Noise Come From?

Stepper motors are widely used in nearly all kinds of moving applications in automation, digital manufacturing, and medical and optical appliances.

Their advantages are comparatively low cost, high torque at rest and at low speeds without using a gearbox, and inherent suitability for positioning tasks. Stepper motors don’t necessarily require complex control algorithms or position feedback to be commutated, in contrast to three-phase brushless motors and servo drives.

Their downside has been high noise levels, even at low speeds or at rest. Stepper motors have two major sources of vibrations: step resolution, and side effects that result from chopper and pulse-width-modulation (PWM) modes.

Step Resolution

Low-resolution step modes like full- or half-stepping are the primary source of noise. They introduce tremendous vibrations that spread throughout the whole mechanics of a system, especially at low speeds and near specific resonance frequencies. At higher speeds, these effects decrease due to the moment of inertia.

2. The plot shows reduction of motor vibrations when switching from full stepping to high microstep resolutions. (Source: TRINAMIC Motion Control)

With modern control processes and careful layout, these motors can operate virtually silently. The best way to reduce acoustic emissions is to reduce resonance and mechanical vibrations, by increasing step resolution using smart drivers. Smaller steps, called "microstepping," smooth motor operations, greatly reducing resonance (Fig. 2).

Microstepping divides one full step into smaller pieces, or microsteps. Typical resolutions are 2 (half-stepping), 4 (quarter-stepping), 8, 32, or more microsteps. The maximum resolution for microstepping is defined by the driver electronics' analog-to-digital and digital-to-analog capabilities.
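To illustrate the idea (a generic sketch, not Trinamic’s implementation), microstepping drives the two motor coils with cosine- and sine-weighted currents so that each full step of 90 electrical degrees is divided into finer increments:

```c
#include <stdio.h>
#include <math.h>

/* Generic microstepping illustration: the two coil currents follow cosine and
 * sine of the electrical angle, so one full step (90 electrical degrees)
 * is split into 'usteps' increments. Values are illustrative. */
int main(void)
{
    const int    usteps = 8;      /* microsteps per full step (e.g., 8, 32, 256) */
    const double i_max  = 1.0;    /* peak coil current, A                        */
    const double pi     = 3.14159265358979;

    for (int n = 0; n <= usteps; n++) {
        double theta = (pi / 2.0) * (double)n / (double)usteps;  /* 0..90 deg */
        printf("ustep %2d: I_A = %+.3f A, I_B = %+.3f A\n",
               n, i_max * cos(theta), i_max * sin(theta));
    }
    return 0;
}
```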

Chopper and PWM Modes

Another source of noise and vibration originates from the parasitic effects of conventional chopper and PWM modes typically used with stepper motors. These effects are often neglected due to the dominant impact of coarse step resolution. But with microstepping, these effects become apparent and even audible.

The classic constant off-time PWM chopper mode is a current-controlled PWM chopper that works with a fixed relation between fast decay and slow decay phases. At its maximum point, the current reaches the specified target current, resulting in an average current that is lower than the desired target current.

In a full electrical revolution, this results in a plateau around the zero-crossing area of the sine wave when the sign (direction) of the current changes. What’s the impact of this plateau? Actually, there’s a small period with zero current in the motor windings, meaning there’s no torque at all, which leads to a wobbling behavior and vibrations, especially at lower speeds.

Going a Step Further: Totally Silencing Stepper Motors

The TMC5130A-TA—a small, smart stepper-motor driver and controller IC that includes StealthChop mode—was the ultimate solution for this remarkable analog record player.

Trinamic's stepper-motor controller and drivers allow a stepper motor with up to 256 (8-bit) microsteps per full step. Using such a high microstep resolution means the motor's rotor is now stepped in much smaller angles, or smaller distances, reducing oscillation and thus noise.

While microstepping reduces a large part of the vibrations caused by low step resolutions, other sources of vibrations become more perceivable when using high microstep resolutions. Advanced current-controlled PWM chopper modes like Trinamic's SpreadCycle, for example, already reduce vibrations and wobbling to a large extent, sufficient for many standard applications and well-suited for higher-speed applications.

Nevertheless, even with current-controlled chopper modes like SpreadCycle, which is implemented in hardware, there’s still some audible noise and vibration due to unsynchronized motor coils, regulation noise of a few millivolts at the sense resistors, and PWM jitter. This can be critical for high-end applications, slow- to moderate-speed applications, and anywhere noise is unacceptable.

For the Dereneville DTT-01-S linear tracking tonearm, it was intolerable, because the noise coming from the microstepping drive and hybrid stepper is superimposed on the audio signal, especially within the silent grooves at the transition between individual tracks.

Without introducing additional vibrations, Trinamic's StealthChop algorithm, also implemented in hardware, ultimately silences stepper motors by following a different approach compared to current-based chopper modes like SpreadCycle. It’s responsible for the noiseless, smooth movement of the tonearm and stylus.

Combined with closed-loop tracking-angle regulation and precision laser optics, this results in a maximum tracking-angle error for the headshell plus stylus of less than 0.05 degree. A good conventional pivoted tonearm has a typical tracking-angle error of 2 to 3 degrees and, furthermore, suffers from skating forces and mechanical wear of the groove.

StealthChop's technology is voltage chopper-based. Trinamic has improved voltage-mode operation and combined it with current control. Based on the current feedback, the TMC5130A-TA chip's driver regulates voltage modulation to minimize current fluctuation. As a result, the system can self-adjust to the motor's parameters and the operating voltage. Small oscillations caused by the regulation algorithms of direct-current control loops are eliminated.

Motor drivers equipped with StealthChop combine current waveforms that come close to those of an analog drive with slightly lower power consumption, at no additional cost. The result is whisper-quiet motion.

StealthChop delivers exceptionally quiet stepper-motor performance, leaving only the noise from ball bearings, which can’t be changed. StealthChop applications have achieved noise levels 10 dB and more below classical current control. As we know from physics, a change of −3 dB corresponds to roughly half the acoustic power.

Alternative Technology Solutions

To solve their tonearm noise problem, the AVDesignHaus engineers first looked at mechanical solutions such as different types of bearings and different methods for mounting the tangential tonearm. When none of these worked, they decided to redesign the tonearm in search of absolute quiet, trying several different drive concepts: a linear drive, a toothed-belt drive, and a trapezoidal spindle with a polymer nut. Each brought small improvements, but not enough for the quiet they sought.

On the electrical side, they started with Trinamic's TMC260-PA stepper-motor driver using 256 microsteps and its current-based chopper mode, SpreadCycle. It didn't solve the challenge. The engineers then assumed that, instead of the 200 full-step motor they began with, driving an 800 full-step stepper motor would be sufficient. However, the vibrations produced by the motor's stepping were still too strong and became audible in the audio output signal.

Finally, they went back to the 200 full-step motor using 256 microstepping in combination with StealthChop, implemented in the TMC5130A-TA stepper-motor driver and controller.

Conclusion

The new fully automatic tangential tonearm Dereneville DTT-01-S as used in the remarkable Dereneville Modulaire MK III analog turntable truly is the first of its kind. It redefines the standard in the analog hi-fi world.

3. Here’s the final controller PCB for the electromechanical actuators of the tangential tonearm DTT-01-S, along with Trinamic's smart stepper-motor driver solutions. (Source: AVDesignHaus)

The tonearm relies on the TMC5130A-TA stepper-motor driver and controller (Fig. 3). This smart IC with StealthChop mode for ultra-silent stepper motor operation—no audible noise and no physical vibrations—adds the necessary final touch to this perfect piece of engineering, and is responsible for the pure sound output audiophiles love to hear.

While the manufacturing volumes might be relatively low in terms of the IC business, many comparable applications can take advantage of this technology. Examples include wafer handling in semiconductor manufacturing equipment, medical applications, and lab automation. These share similar quality requirements for low noise and vibration.

Dr. Stephan Kubisch is Head of R&D at TRINAMIC Motion Control GmbH & Co. KG.

A Teardown of the Trek Sensor Bicycle Speedometer

$
0
0

When six of the same type speedometer all stopped functioning, it meant teardown time to pinpoint the cause as well as come up with a remedy to get them back into working condition.

I use bicycle speedometers on my five old Harley Sportsters. They’re smaller, lighter, and far more accurate than a mechanical speedometer. I tried several brands over the years, but have found an older Trek Sensor model to be the best. It has a regular and a trip odometer, with an understandable display. Best yet, it’s fairly intuitive to program and reset.

A Trek speedometer on my bike stopped working when the left pushbutton quit functioning. This made it impossible to reset the trip odometer to zero miles, as well as to switch the display among its various functions.

I was not too concerned since I have six spare units. To my horror, when I tried to program any of the other speedos, I found that none of the buttons would work on any of the units. It turned out that the designers made their own switches by taping cupped Belleville spring washers over gold-plated pads on the printed circuit board (PCB). After a decade or two, the adhesive from the tape had leached under the washers and prevented electrical contact. I could understand this happening on the units mounted on my motorcycles, since they were in my hot Florida garage for a few years. Yet even the spare units that had been inside the house had this same failure mode.

Every time I see system engineers regard a switch or connector as trivial and design their own, things go wrong. I once worked for a Palm Pilot accessory startup. They disliked the terms of the contract for the docking connector. The company asked that we limit any order to less than 45,000 per month. Heck, we never sold 100 of them, as the project eventually failed.

But a co-worker thought it would be better to design our own connector using springs and ball bearings to make contact with the edge-card connector in the Palm Pilot. I convinced him that contact design is not trivial and takes long experience and lots of testing. Thankfully, I prevailed and we just bought the real docking connector from the reputable manufacturer.

When it comes to switches and connectors, leave it to the experts at Omron and NKK and CK, as well as Tyco and Molex and Amphenol, to name just a few. This speedometer is a case in point. I tore down one unit, and was delighted to find I could fix the switches with solvent and Scotch tape, as seen below. I fixed all six spare units, and have them in reserve to replace any bad speedometers on my motorcycles.

The Trek Sensor bicycle speedometer is an older model that works fine on motorcycles. They came in opaque and clear cases. The very early model was called a Trek Sonic.

The backside of the Trek Sensor speedometer has a compartment for the battery as well as two molded-in contacts for the mounting shoe that connects to the magnetic pickup. You mount a magnet on the wheel and the speedo senses each full revolution. By programming in the circumference of the wheel in centimeters, you get a very accurate speedometer.
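The underlying speed calculation is simple (a sketch with example numbers, not the Trek firmware): one magnet pass per revolution gives the revolution period, and speed is the programmed circumference divided by that period:

```c
#include <stdio.h>

/* Speed from one magnet pulse per wheel revolution:
 * speed = circumference / revolution period. Example numbers only. */
int main(void)
{
    const double circumference_cm = 210.0;  /* programmed wheel circumference */
    const double period_s         = 0.075;  /* time between magnet pulses, s  */

    double speed_m_per_s = (circumference_cm / 100.0) / period_s;
    double speed_mph     = speed_m_per_s * 2.23694;   /* m/s to mph */

    printf("speed = %.1f m/s (%.1f mph)\n", speed_m_per_s, speed_mph);
    return 0;
}
```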

The case halves snap together. There’s an O-ring on the top half that provides water resistance. The problematic switches are the two silver domes on the gold-plated pads on the PCB. You can just make out the little squares of clear tape holding the domes to the circuit board.

At the left is the O-ring. Above the top case-half is the black plastic placard that has the logo and model name. To the right is the LCD module, with a plastic holder and zebra-strip contacts to mate with the PCB. At the bottom is the rubber pushbutton surface and two black plastic retainers.

The back of the circuit board has the battery holder and another one of those infernal home-brew switches for a reset function. The contacts for the magnetic sensor are directly attached to the PCB. The large capacitor provides hold-up, so if you change the battery swiftly, you won’t lose the programming. This particular unit was the only one with green corrosion on the center battery terminal. I cleaned it with a small wire brush and applied DeoxIT contact treatment.

The top side of the PCB has the programming and mode switches, here removed so you can see the tape that held the spring washers to the PCB. The microcontroller is a chip-on-board (COB) unit, with the watch crystal at the top left along with a handful of capacitors as well as one resistor.

I cleaned the domes and circuit board with Goof Off, the adhesive cleaner. Then I put a small drop of DeoxIT Gold on the gold PCB pads. I re-secured the domes with plain old Scotch tape. All six units worked perfectly after this procedure.

Once the switches worked, I could initialize the speedometer with wheel size, date, time, mph vs. kph, and the odometer setting. That’s one of the things I love about the Trek Sensor—you can program in the odometer setting after a reset or battery failure so that you don’t have to start at zero every time. I connected the magnetic sensor and waved the wheel magnet near it to verify the speedometer was working. I used a Dykem white paint marker to mark the back of the units as I fixed them.

Even a simple job requires a plethora of tools. The Goof Off and DeoxIT did the work, but I needed Q-tips to scrub and apply them. An X-acto knife and scissors cut the Scotch tape. Tweezers were essential to peel the old tape off, as well as position the domes and new tape. I used the knife to pry the case-halves apart. I tried the eraser pen to clean the gold contacts, but it wasn’t really needed.

To fix anything, you need to see it. I used my 3M Nuvo safety/reading glasses, as well as a machinist magnifier, to see what I was doing. The safety glasses protect your eyes any time you have spray cans. No matter how careful you are, they always spritz in your eye.

Not only was this fix 100% successful, I found a nice Maxell button-battery cross-reference guide. The speedometer manual specified an alkaline LR43 battery cell, which has a 55-mAh capacity. The cross-reference chart shows the silver-oxide SR43W to be the exact same size, but with a capacity of 125 mAh, more than twice that of the LR43. I found a 5-pack on Amazon in the equivalent 386 size; much cheaper than the 4 bucks each they wanted at the drug store. Heck, I only paid 10 bucks for most of the speedometers on eBay. With all of these spares, my plan is to have two speedometers on each of my five motorcycles, for redundancy.


The Evolution of the Instrumentation Amplifier

$
0
0

Whether it’s a vision-correction medical instrument or factory press, INAs offer an excellent way to amplify microvolt-level sensor signals while simultaneously rejecting high common-mode signals.


In the past, the term instrumentation amplifier (INA) was often misused, referring to the application rather than the device’s architecture. INAs are related to op amps, in that they’re based on the same architecture, but an INA is a specialized version of an op amp. INAs are specifically designed and used for their high differential gain to amplify microvolt-level sensor signals while simultaneously rejecting high common-mode signals that can be several volts. This is important since some sensors produce a relatively small change in voltage or current, and this small change must be accurately captured.

Let’s consider a few applications that benefit from INAs. One example is a medical instrument that uses sensors to align laser stepper motors for vision-correction eye surgery. High accuracy is crucial, and other equipment in the operating room can’t be allowed to compromise the sensor signals and cause unexpected results.

Another example is a factory press. These machines apply thousands of pounds of force to bend metal into shapes. Using sensors, these machines are designed to stop if they detect a human hand. In this example, it’s critical that electrical noise from other factory equipment doesn’t cause interference that could lead to a malfunction.

In both cases above, the first step in the journey of the sensor signal is through an instrumentation amplifier. The tiny sensor signals must be accurately amplified in all environments. Instrumentation amplifiers are specially designed to do exactly that—to accurately amplify small signals resulting in high gain accuracy in an electrically noisy environment.

Other considerations further enhance the performance of an INA. Low power consumption is important to extend battery life. A low operating voltage allows the battery to be used over more of its depletion curve, extending battery life. A wide input voltage range allows compatibility with more sensors. And impedance matching at the input contributes to the seamless interface to sensors.

How INA Designs Have Evolved

With an endless number of consumer, medical, and industrial applications, designs have evolved over the years to take advantage of the performance benefits offered by INAs. Let’s look at the evolution of INA designs, from the original approaches to the instrumentation amplifiers available today. By reviewing these architectures and their associated strengths and limitations, this article shows the performance improvements seen in present-day instrumentation amplifiers along with real-life applications.

Before delving into the different approaches, let’s first look at what we’re trying to accomplish, using the diagram in Figure 1.

1. Block diagram of sensor interface to INA.

The sensor outputs are connected to the INA inputs that amplify the differential voltage. Noise comes from many sources, both radiated and conducted. Typical noise may derive from switching power supplies, motors, and wireless devices. Such noise is reduced by shielding and good PCB layout practices; however, some noise will get through.

Fortunately, most of that noise shows up as an in-phase, common-mode voltage (VCM) superimposed on the differential input sensor voltage (VDM), and a properly designed instrumentation amplifier with good common-mode rejection ratio (CMRR) will greatly reduce this voltage to maintain gain accuracy. A minimum CMRR is typically specified at dc, while the ac CMRR performance is documented in performance curves.
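To put a CMRR number in perspective, here’s a minimal Python sketch (with hypothetical values) that converts a CMRR specified in decibels into the input-referred error produced by a given common-mode voltage.

```python
def cm_error_referred_to_input(v_cm, cmrr_db):
    """Input-referred error voltage caused by a common-mode signal,
    given the amplifier's CMRR in dB (error = VCM / 10^(CMRR/20))."""
    return v_cm / (10 ** (cmrr_db / 20.0))

# Hypothetical example: 1 V of common-mode noise at the inputs
for cmrr_db in (34, 54, 100):
    err = cm_error_referred_to_input(1.0, cmrr_db)
    print(f"CMRR = {cmrr_db:3d} dB -> {err * 1e6:8.1f} uV of input-referred error")
```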

The Discrete Difference Amplifier

If you want to amplify the voltage difference across the sensor output, a simple difference amplifier works, but it has many downsides. In the simple implementation shown in Figure 2, VIN+ is biased to VREF (typically half the supply voltage) for single-supply applications.

2. Discrete difference amplifier.

Designed to amplify differential voltages, the operational amplifier itself provides good CMRR, but this is overwhelmed by the circuit surrounding it. Any mismatch in the external resistors (including mismatch contributed by any divider network connected to VREF) limits the ability of the op amp to reject common-mode signals, resulting in reduced CMRR. Resistor tolerances are simply not tight enough to maintain a good CMRR that would be expected from an INA. We can see how much the resistor mismatch affects CMRR using the equations below.

For a difference amplifier, the worst-case dc CMRR set by resistor mismatch is:

CMRRDIFF (dB) = 20 × log10[(1 + R2/R1)/K]

where K is the net matching tolerance of R1/R2 to R3/R4 and can be as high as 4TR (worst case), with TR being the resistor tolerance. For a difference amplifier with G = 1 V/V:

  • If TR = 1%, worst-case dc CMRRDIFF will be 34 dB
  • If TR = 0.1%, worst-case dc CMRRDIFF will be 54 dB
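As a quick check of those numbers, this short Python sketch applies the worst-case assumption K = 4·TR from above to compute the worst-case dc CMRR for a given gain and resistor tolerance.

```python
import math

def worst_case_cmrr_db(gain, tol):
    """Worst-case dc CMRR (dB) of a discrete difference amplifier,
    assuming the net resistor mismatch K = 4 * tol (worst case)."""
    k = 4.0 * tol
    return 20.0 * math.log10((1.0 + gain) / k)

print(worst_case_cmrr_db(1.0, 0.01))    # ~34 dB with 1% resistors
print(worst_case_cmrr_db(1.0, 0.001))   # ~54 dB with 0.1% resistors
```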

The amplifier amplifies the differential voltage at the input, and its output is:

VOUT = (G × VDM) + VREF

         = (R1/R2) × (VIN+ − VIN-) + VREF

The problem is that the input voltages (VIN+ and VIN-) include superimposed noise, and any common-mode voltage that isn’t rejected (due to poor CMRR) is amplified along with the signal, corrupting the output.

This simple approach has other drawbacks as well. The input impedance of an operational amplifier is typically high (MΩ to GΩ range). However, because of the feedback path and reference, the circuit’s input impedance is both reduced and imbalanced, loading the sensor and adding inaccuracies. While this circuit will amplify a small sensor signal, its poor gain accuracy in the presence of noise makes it unsuitable for instrumentation purposes.

The Three Op-Amp IC Approach

This is a common INA packaged in a single integrated circuit (IC) (Fig. 3). The circuit is divided into two stages: The input stage has two noninverting buffer amplifiers, and the output stage is a traditional difference amplifier. The internal resistors used throughout this design are matched to a very close tolerance, which is only possible with trimmed on-chip resistors, resulting in a much higher CMRR.

3. Three-operational-amplifier IC approach.

The input-stage amplifiers also provide high impedance, which minimizes loading of the sensors. The gain-setting resistor (RG) allows the designer to select any gain within the operating region of the device (typically 1 V/V to 1000 V/V).

The output stage is a traditional difference amplifier. The ratio of internal resistors, R2/R1, sets the gain of the internal difference amplifier, which is typically G = 1 V/V for most instrumentation amplifiers (the overall gain is driven by the amplifier in the first stage). The balanced signal paths from the input to the output yield excellent CMRR.

The design is simple to implement, has a small footprint, and fewer components, leading to lower system costs. The design is also compatible with single-source supplies using the VREF pin. However, even with this design, there are limitations to consider.

Three op-amp INAs achieve high CMRR at dc by matching the on-chip resistors of the difference amplifier, but the feedback architecture can substantially degrade the ac CMRR. In addition, parasitic capacitances can’t be completely matched, causing mismatches and reduced CMRR over frequency. The common-mode voltage input range is limited so that internal nodes don’t saturate. The VREF pin requires a buffer amplifier for optimal performance. Lastly, the temperature coefficients of the external and internal gain resistors aren’t matched, which contributes to gain drift over temperature.

Mathematically, the gain accuracy depends on resistor matching:

VOUT = (G × VDM) + VREF

where:

G = [1 + (2RF/RG)] × (R2/R1)

VDM = (VIN+ − VIN-)
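As a numeric illustration, the sketch below (Python, with hypothetical resistor values) applies the gain relationship above, G = [1 + (2RF/RG)](R2/R1), to compute the overall gain and output voltage of a three-op-amp INA.

```python
def ina_3opamp_output(v_in_p, v_in_n, v_ref, rf, rg, r2_over_r1=1.0):
    """Output of a three-op-amp INA: VOUT = G*VDM + VREF,
    with G = (1 + 2*RF/RG) * (R2/R1)."""
    gain = (1.0 + 2.0 * rf / rg) * r2_over_r1
    v_dm = v_in_p - v_in_n
    return gain * v_dm + v_ref, gain

# Hypothetical values: RF = 25 kOhm, RG = 505 Ohm -> G of about 100 V/V
v_out, g = ina_3opamp_output(2.5015, 2.5005, 1.65, rf=25e3, rg=505.0)
print(f"Gain = {g:.1f} V/V, VOUT = {v_out:.4f} V")  # 1-mV difference -> ~100 mV above VREF
```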

The Indirect Current Feedback Approach

The indirect current feedback (ICF) INA uses a novel voltage-to-current conversion approach (Fig. 4). It comprises two matched transconductance amplifiers, GM1 and GM2, and a high-gain transimpedance amplifier (A3). The design doesn’t rely on balanced resistors, so internally trimmed resistors aren’t required, thereby reducing manufacturing cost. Another advantage is that the external resistors needn’t match any on-chip resistors. Only the temperature coefficients of the external RF and RG resistors need to match as closely as possible for minimal gain drift.

4. Indirect current feedback approach.

The dc CMRR is high because amplifier GM1 rejects common-mode signals, and the ac CMRR doesn’t decrease significantly with increasing frequency. As mentioned earlier, the three-op-amp approach’s input range is limited to prevent internal-node saturation. With an ICF, the output voltage swing isn’t coupled to the input common-mode voltage, resulting in an expanded range of operation not possible with the three-op-amp architecture.

The second stage (GM2 and A3) differentially amplifies the signal and further rejects common-mode noise on VFG and VREF. Single-supply operation can still be used by applying a bias to VREF.

The ICF INA transfer function is:

VOUT = (G × VDM) + VREF

where:

VDM is the differential-mode voltage:

VDM = (VIN+ − VIN-)

        = (VFG − VREF)
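For a concrete example, ICF instrumentation amplifiers commonly set the gain with the external feedback network as G = 1 + RF/RG; the Python sketch below assumes that relationship (check the specific device’s datasheet) and uses hypothetical sensor values.

```python
def icf_ina_output(v_in_p, v_in_n, v_ref, rf, rg):
    """Indirect-current-feedback INA output, assuming the common
    external-feedback gain relationship G = 1 + RF/RG (device-dependent)."""
    gain = 1.0 + rf / rg
    return gain * (v_in_p - v_in_n) + v_ref

# Hypothetical bridge-sensor signal: 2 mV differential riding on a 1.5-V common mode
print(icf_ina_output(1.501, 1.499, v_ref=1.65, rf=99e3, rg=1e3))  # ~1.85 V
```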

Figure 5 shows a few typical applications for an INA. A variety of sensors are shown, each accurately amplified by an INA that feeds a converter and microcontroller.

5. Shown are examples of a typical circuit using an INA with a sensor.

Conclusion

Techniques for amplifying small signals in the presence of noise have evolved over the years. The simplest approach, the discrete difference amplifier, isn’t suitable as an INA. The integrated three-op-amp approach has significant advantages, including high dc CMRR and balanced, high input impedances, with gain set by a single resistor. However, it has common-mode voltage limitations, and it’s difficult to match internal versus external resistor temperature coefficients, resulting in gain drift. The impedance at the VREF pin can also negatively impact CMRR unless a buffer is used.

The ICF approach offers high CMRR (even at higher frequencies), a wider common-mode voltage range, and no on-chip trimmed resistors, resulting in lower cost and low gain drift over temperature.

Greg Davis is a Sr. Product Marketing Manager for Microchip Technology’s Mixed Signal Linear Products Division.

The Active Clamp Flyback Converter: A Design Whose Time Has Come

Sponsored by Texas Instruments: Though long recognized as a superior option for small power adapters, implementation difficulties have held back the topology’s widespread use. Integrated ACF controllers are now rewriting that script.

One trend that shows no sign of ending is the move to smaller personal electronic devices. From smartphones to tablets, each successive generation is smaller and more powerful than the last. These devices are usually battery-powered and must be regularly charged by an external charger or adapter.

The movement to more powerful processors and larger screens poses a challenge for the power designer, who must continually come up with new ways to boost efficiency and reduce battery-charger size. Stringent new standards such as DoE Level VI and EU CoC V5 Tier-2 raise the efficiency requirements on a wide range of power adapters.

As a result, designers must improve the design of the discontinuous-mode quasi-resonant flyback topology. Such a converter is traditionally used to implement a low-cost ac-to-dc power adapter for power levels up to 100 W.

A flyback converter uses relatively few components, is simple to design, and can accommodate multiple outputs. With a silicon power MOSFET, its efficiency can run as high as 90% if synchronous rectification is employed and the switching frequency is kept below 100 kHz to reduce switching losses.

1. The flyback topology is simple and low cost, but high voltage stress on the switching transistor limits its use in lower-power (<100 W) applications. (Source: TI Training: “Understanding the Basics of a Flyback Converter”)

Figure 1 shows the essential elements of the flyback design. The transformer acts as a coupled inductor rather than as a true transformer. When the power switch is on, the flyback converter stores energy in the primary-side inductor. During the switch off time, the energy transfers to the secondary and from there to the output. Current flows in either the primary or secondary winding, but not both at the same time.

The simplest mode of operation is discontinuous mode (DCM). The power stage is designed to allow the transformer to demagnetize completely during each switching cycle. The most basic DCM control scheme switches at a fixed frequency and modulates the peak current to support load variations.
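To make the peak-current control scheme concrete, the following Python sketch (hypothetical component values) uses the standard DCM energy balance, Pout ≈ ½·Lp·Ip²·fSW·η, to estimate the peak primary current required for a given output power.

```python
import math

def dcm_peak_primary_current(p_out, l_pri, f_sw, efficiency=0.9):
    """Peak primary current for a DCM flyback: each cycle stores
    E = 0.5 * Lp * Ip^2 in the primary and delivers E * eta to the output,
    so Pout = 0.5 * Lp * Ip^2 * fsw * eta."""
    return math.sqrt(2.0 * p_out / (l_pri * f_sw * efficiency))

# Hypothetical 45-W design: 80-uH primary inductance, 100-kHz switching
print(f"Ip = {dcm_peak_primary_current(45.0, 80e-6, 100e3):.2f} A")  # ~3.5 A peak
```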

Stressed Out Transistor

One of the issues with the design shown in Fig. 1 is that the flyback topology imposes a high level of stress on the low-side switching transistor Q1 when it turns off. Let’s analyze this in a little more detail.

As shown in Figure 2, we can model the flyback transformer as a leakage inductance (LLK), a primary-side magnetizing inductance (LPM), and an ideal transformer. The leakage inductance is effectively in series with Q1. When Q1 switches off, the current through the LLK and LPM is interrupted. The energy stored in LPM transfers to the secondary winding and the output, but the leakage inductance energy causes a large voltage spike that stresses the power switch.

2. Dissipating the energy from the leakage inductance requires a passive or active clamp circuit. (Source: TI Training: “What is active clamp flyback?”)

Adding a clamp circuit (also called a snubber) on the primary side of the transformer provides a path for the leakage inductance current. The clamp design must meet several goals: it must limit the stress on Q1 to an acceptable level; it must discharge the leakage inductance quickly with minimum losses; and it must not degrade the overall loop dynamics.

Fig. 2 shows two options for the clamp. The passive clamp uses a Zener diode and a blocking diode in series. When Q1 is on, the blocking diode is reverse-biased and no current flows through the clamp. When Q1 turns off, the voltage spike turns on the diode, and the circuit clamps the drain voltage to (VIN + VZ + VD). 

This method is simple and relatively inexpensive, but the passive clamp circuit lowers the system efficiency because it dissipates the leakage-inductance energy as heat. The power loss increases with switching frequency according to the passive-clamp power-dissipation equation:

PCLAMP = ½ × LLK × IP² × fSW × [VCLAMP/(VCLAMP − (NP/NS) × VOUT)]

where VCLAMP is the voltage across the clamp when Q1 is off; NP/NS is the transformer turns ratio; IP is the transformer peak primary current; LLK is the leakage inductance; and fSW is the switching frequency.
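As an illustration of how that loss scales with frequency, here’s a small Python sketch that evaluates the expression above with hypothetical component values.

```python
def passive_clamp_loss(l_lk, i_p, f_sw, v_clamp, n_ratio, v_out):
    """Passive-clamp dissipation: the leakage energy 0.5*Llk*Ip^2 is burned
    each cycle, scaled by Vclamp / (Vclamp - (Np/Ns)*Vout)."""
    return 0.5 * l_lk * i_p**2 * f_sw * v_clamp / (v_clamp - n_ratio * v_out)

# Hypothetical numbers: 0.8-uH leakage, 3.5-A peak, 150-V clamp, 5:1 turns, 20-V output
for f_sw in (100e3, 200e3):
    p = passive_clamp_loss(0.8e-6, 3.5, f_sw, v_clamp=150.0, n_ratio=5.0, v_out=20.0)
    print(f"fsw = {f_sw/1e3:.0f} kHz -> clamp loss ~ {p:.2f} W")  # doubles with frequency
```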

Once the leakage energy is depleted, the output diode continues to conduct until the magnetizing current has decreased to zero. After the output diode turns off, the residual energy in the system causes a resonance between the magnetizing inductance LM and the switch-node capacitance CSW.

3. The QRF uses the first resonant valley to trigger the next switching cycle. (Source: Texas Instruments)

The quasi-resonant (QR) mode of operation (Fig. 3), also referred to as critical conduction mode (CCM) or transition mode (TM), takes advantage of this resonance to reduce losses. The QR flyback (QRF) controller detects the first resonant valley—when VSW is at a minimum—and uses the event to control the turn-on of the MOSFET for the next cycle. This technique is known as valley switching (VS).

QR is a popular choice for low-power flyback converters and delivers the maximum amount of power possible with the passive-clamp configuration. Since VSW is not zero at turn-on, though, there’s still a switching power loss, PSW(QRF), given by the equation:

PSW(QRF) = ½ × CSW × VSW² × fSW

where CSW is the switch-node capacitance and VSW is the drain voltage at the bottom of the resonant valley.
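The following Python sketch (hypothetical values) shows why the residual valley voltage matters: it sets the turn-on loss, and driving it to zero, as the next section describes, removes that loss entirely.

```python
def qrf_switching_loss(c_sw, v_sw, f_sw):
    """Turn-on loss of a valley-switched QRF: the energy 0.5*Csw*Vsw^2
    left on the switch node is dissipated in the FET every cycle."""
    return 0.5 * c_sw * v_sw**2 * f_sw

# Hypothetical: 100-pF switch node, 140-kHz switching
print(qrf_switching_loss(100e-12, 100.0, 140e3))  # valley at 100 V -> ~0.07 W
print(qrf_switching_loss(100e-12, 0.0, 140e3))    # ZVS (0 V) -> no turn-on loss
```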

Adding ZVS Eliminates Switching Loss

The active clamp circuit solves this issue. This design replaces the two diodes with a high-voltage FET (QC in Fig. 2) in series with a clamping capacitor. The FET can be either p-channel or n-channel. The p-channel device is simpler to control, but fewer choices are available at high voltages. For offline use, therefore, an n-channel FET is preferred, even though it requires a high-side driver.

Instead of wasting the leakage inductance energy, the active clamp circuit improves efficiency by storing the energy in a capacitor, then delivering it to the output later in the switching cycle. Combining TM operation with an active clamp can completely remove the switching loss, allowing the switching frequency of an active clamp flyback to be higher, thus reducing the size of the power supply (Fig. 4).

4. A comparison of three clamp strategies shows successively lower switching voltage as we progress from left to right. (Source: TI Training: “Comparison of GaN and Silicon FET-Based Active Clamp Flyback Converters - Part 1”)

During the demagnetization time of the transformer, the high-side switch remains on and the clamp capacitor resonates with the leakage inductor. This allows the active clamp to recycle the leakage energy to the output.

By keeping the high-side switch on, the magnetizing current Im is allowed to ramp all the way down to zero and past zero to the reverse direction; QC then turns off with a little bit of negative current flowing in it. The negative current discharges the junction capacitance of the switch node and allows the low-side switch to turn on with zero volts across it after a short delay. Thus, the active clamp flyback (ACF) operating in transition mode can also eliminate the switching loss.  This technique is called zero-voltage switching (ZVS).

The ACF has two slight drawbacks, though. Building up the additional negative current increases the flux density, so the active clamp core loss is slightly higher than that of the passive clamp. Also, the clamp current flows in the transformer primary winding during demagnetization time. The larger primary current increases the total winding loss; a large amount of negative current can negate the efficiency gains of the ACF if the switch-node capacitance becomes too large.

The table compares the performance of the three topologies in four categories: clamp loss, switching loss, core loss and winding loss.

Improving the efficiency with an active clamp pays dividends in power density. Switching at 150 to 260 kHz, a 65-W passive-clamp QRF can achieve a power density of around 11 W/in.³; an equivalent active-clamp flyback design switching at 120 to 165 kHz can achieve around 14 W/in.³.

GaN Switching: The Icing on the Cake

Changing the silicon power devices to transistors based on gallium nitride (GaN) yields further efficiency improvements. Compared to a silicon MOSFET, a GaN device has lower on-resistance, higher breakdown voltage, better reverse-recovery characteristics, and can operate at higher temperatures. It also has much lower switching losses; therefore, it can operate at higher switching frequencies.

Higher switching frequencies allow for the use of smaller capacitors, inductors, and transformers, which in turn reduce power converter size, weight, and cost. Switching to GaN can cut the size of the adapter by up to 50%.

The TI Power Supply Design Seminar for 2018 includes a set of training videos with a detailed analysis of the active-clamp flyback circuit and a comparison of Si and GaN implementations.

Practical Implementation of the ACF with the UCC28780

Reliably controlling the active clamp high-side switch isn’t easy, but new devices such as the Texas Instruments UCC28780 simplify the task considerably. The UCC28780 is a high-frequency active-clamp flyback controller that monitors VSW and adjusts the on-time of QC for zero-voltage switching.

The device works with VDS-sensing synchronous rectifier controllers for higher conversion efficiency and a more compact design. A user-programmable advanced control law feature allows performance to be optimized for both Si and GaN primary-stage power FETs, enabling compliance with efficiency standards such as DOE Level 6 and COC Tier 2.

5. The UCC28780 simplifies the design of an ACF converter using either Si or GaN power FETs (Source: “UCC28780 High Frequency Active Clamp Flyback Controller” PDF, p. 37)

Figure 5 shows the UCC28780 in a typical off-line application. Tying the SET pin to either ground or REF allows the designer to select a GaN-based or a Si-based half-bridge power stage, respectively.

On the secondary side, Texas Instruments' new synchronous rectifier controller, the UCC24612, is specifically designed to work with the UCC28780, allowing the rectifier diode to be replaced with a more efficient FET.

Other advanced UCC28780 features include operation up to 1 MHz; adaptive ZVS with auto-tuning to compensate for component variations; active ripple compensation; adaptive burst mode to improve efficiency with light or medium loads; and programmable over-power protection providing consistent power for thermal design across a wide line range.

Figure 6 shows how a GaN-based UCC28780 design supplying 45 W exceeds the requirements of the new efficiency standards.

6. A 45-W, 20-V ACF design based on the UCC28780 and a GaN power stage comfortably exceeds the new efficiency standards. (Source: “UCC28780 High Frequency Active Clamp Flyback Controller” PDF, p. 1)

Texas Instruments has also developed a fully tested ac/dc ACF power-adapter reference design that delivers 65 W. The design combines the UCC28780, the UCC24612, and the TPS25740B, TI’s USB Type-C and USB PD source controller, to meet the USB Power Delivery (USB PD) version 2.0 standard. Operating at 600 kHz, the design achieves a peak efficiency of 92%. Measuring just 62 × 26.8 × 20 mm, it delivers a power density of 30 W/in.³, much higher than that of traditional solutions.
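As a quick sanity check of that power-density figure, the arithmetic below converts the quoted enclosure dimensions to cubic inches and divides them into the 65-W rating.

```python
# Power-density check for the 65-W reference design (dimensions from the text above)
W_OUT = 65.0                       # output power, W
DIMS_MM = (62.0, 26.8, 20.0)       # enclosure dimensions, mm

volume_mm3 = DIMS_MM[0] * DIMS_MM[1] * DIMS_MM[2]
volume_in3 = volume_mm3 / 25.4**3  # 1 in. = 25.4 mm
# Prints ~2.0 in.^3 and ~32 W/in.^3, roughly consistent with the quoted 30 W/in.^3
print(f"{volume_in3:.2f} in.^3 -> {W_OUT / volume_in3:.1f} W/in.^3")
```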

In addition to those already mentioned, the block diagram (Fig. 7) shows the other key components. The main switching devices are NV6115 and NV6117 GaN devices from Navitas Semiconductor. Other devices include the ISO7710 reinforced single-channel digital isolator; the CSD17578Q3A 30 V N-Channel NexFET Power MOSFET;  the ATL431 2.5V Low-Iq Adjustable Precision Shunt Regulator; and the TPD1E05U06 ultra-low-capacitance IEC ESD protection diode.

7. Shown is a block diagram of a USB PD-compliant 65-W adapter with 92% efficiency. (Source: TI Designs: TIDA-01622 PDF)

The design is suitable for a wide range of laptop adapters, smartphone chargers, and other consumer and industrial ac/dc applications.

Conclusion

The active-clamp flyback topology has long been recognized as a superior solution for small high-efficiency power adapters, but difficulties in implementation have prevented its widespread use. Increasing demands for compact, high-efficiency solutions have led to the development of integrated ACF controllers such as the UCC28780, which allow power designers to increase power density and meet new regulatory benchmarks.

The Future of High-Reliability Electronics

The electronics industry has grown tremendously over the last 50 years, and performance and cost advances have enabled capabilities beyond what anyone could have imagined in the early semiconductor days.

The integrated circuit (IC), which recently turned 50 years old, is unquestionably one of the greatest inventions of all time. This technology has transformed our world and has become ubiquitous in our civilization. However, ICs have vulnerabilities that can affect long-term reliability, leading to premature failure of the systems that employ them.

These vulnerabilities are especially critical to the defense industry, since most equipment used by the military contains ICs. But the military is only one user that requires long-term reliability. Industrial control systems such as programmable logic controllers (PLCs) are expected to operate in harsh environments and run uninterrupted for decades. These systems run our infrastructure and provide energy, clean water, and sanitation, as well as goods and services to millions of people.

The automotive industry has used ICs in cars almost as long as they’ve been available, in engine controllers, anti-lock braking systems, chassis controls, transmissions, and driver comfort and assistance systems. Many of these systems (such as air bags) are critical to occupant safety, so reliability is paramount. However, when automobile manufacturers produce millions of cars in a year, the economic impact of a recall due to the premature failure of a component could be devastating.

There are many excellent reasons to pursue reliable ICs. But why do they fail? What can be done to improve reliability, and at what cost?

Why Do ICs Fail?

The process of making ICs sheds some light on the factors that can cause premature failures. The majority of ICs start with a base wafer of pure silicon, which is fabricated through various methods (Czochralski pulling, horizontal gradient freeze, etc.). The purity of the crystal structure is extremely critical to the performance of the circuits built on top, so an epitaxial layer of pure silicon is often deposited on the wafer.

Throughout the process of building up layers, depositing impurities (dopants) that change the electrical properties, and implanting ions, heat is used to either diffuse atoms or anneal the wafer to repair dislocations in the crystal. Once completed, all of these impurities, oxides, and metal traces need to stay in place. However, thermal motion can move them, just as it did during fabrication. Elevated temperatures degrade ICs for this reason and are part of the reliability equation.

Another phenomenon is metal migration, often referred to as electromigration. This is the movement of metal conductors due to the electron wind caused by current flow. Devices with high current densities are more susceptible to this problem. As IC geometries shrink, the current densities in the conductive traces increase as well.  This increased current density can lead to metal migration, causing short or open connections depending on where in the circuit the metal moves—and how much.

Modern computer tools used for the design of ICs take this effect into account and can correctly size and space conductors to minimize this type of failure. The effect is real, though, and continues for as long as the circuit is in operation.

To estimate the mean time to failure (MTTF) of an IC due to electromigration, James Black of Motorola developed an empirical model in 1969, now referred to as Black’s equation:

MTTF = (A/j^n) × e^(Q/kT)

where A is a constant, j is the current density, n is a model parameter, Q is the activation energy, k is Boltzmann’s constant, and T is the absolute temperature.

Black’s equation is not a physical model, but an abstract construct that can estimate the MTTF due to electromigration (based on empirical data acquired at elevated temperatures), which is then applied at the nominal operating temperature. It’s flexible enough to take into account materials and electrical stress (current density), but doesn’t account for other failure mechanisms. Due to this limitation, high MTTF values are suspect and can’t accurately predict failure rates, but they do provide some indication of electromigration failures.
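To show how the model is typically applied, here’s a small Python sketch (with hypothetical parameter values) that uses Black’s equation to estimate the lifetime acceleration factor between an elevated stress temperature and the nominal operating temperature.

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann's constant, eV/K

def black_mttf(a, j, n, q_ev, temp_k):
    """Black's equation: MTTF = (A / j^n) * exp(Q / (k*T))."""
    return (a / j**n) * math.exp(q_ev / (K_BOLTZMANN_EV * temp_k))

# Hypothetical parameters: n = 2, Q = 0.7 eV, same current density at both temperatures
ratio = black_mttf(1.0, 1.0, 2, 0.7, 358.0) / black_mttf(1.0, 1.0, 2, 0.7, 423.0)
print(f"Estimated lifetime at 85 C is ~{ratio:.0f}x the lifetime measured at 150 C")
```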

A strange issue that has plagued the electronics industry since the very beginning is tin whisker growth. Tin (atomic symbol Sn) has been used as a solder and interconnect coating for electronic components since the days of electron tubes. The use of pure (or almost pure) tin causes small whisker-like hairs of metal to grow perpendicular to the surface; these whiskers can extend out many millimeters, easily reaching an adjacent connection. This phenomenon occurs even in the absence of electric fields, including during long-term storage.

Adding a very small percentage of lead (Pb) to the solder significantly slowed down the growth process. But given the push to remove lead from consumer electronics, this issue is back in focus for both the military and companies that produce lead-free solders.

ICs are also susceptible to various types of ionizing radiation, such as gamma rays or X-rays that strip electrons from the atoms of semiconductor material (ionization), or high-speed particles (such as neutrons) with enough energy to displace the atoms themselves. This effect can occur anywhere radiation is present. However, it increases with altitude due to either a loss of atmosphere (fewer atoms to interact with before reaching the IC) or moving beyond the magnetosphere, where the Earth’s magnetic field deflects high-energy charged particles flowing from the sun (and other cosmic sources). Radiation can lead to temporary malfunctions or permanent damage to the IC, depending on the energy level and duration of exposure.

Lastly, there’s the packaging used to protect and interconnect the IC. Packages can be very complex, varying as widely as the ICs they contain. They’re typically made with a conductive alloy lead frame that holds the IC die (or dies) and provides some mechanism of connecting it to the lead frame (flip-chip direct die attach or metal wires). The lead-frame assemblies are encapsulated with various materials, such as epoxy or ceramic, to protect the IC lead-frame assembly and provide mechanical stability.

Packages can have many different failure modes, including a loss of hermeticity (allowing contamination of the die), mechanical failures such as delamination or cracking, and many others.

Reliability Options

To ensure uniform quality, the defense industry created various standards in the 1970s. The Joint Army Navy (JAN) task force introduced the military general standard 38510 (MIL-M-38510), along with standard test method 883 (MIL-STD-883). To qualify, manufacturers had to submit device candidates to the Military Parts Control Advisory Group (MPCAG). Once approved, they would be listed as a qualified source to defense industry designers, which included periodic audits to confirm compliance. The additional testing and screening increased costs substantially, but guaranteed the performance and reliability required for harsh military environments.

In the 1980s, amid an increased number of military system failures, semiconductor manufacturers requested that the government improve the process. In 1986, the U.S. Secretary of Defense announced the Standardized Military Drawing (SMD) program for microcircuits, based on previous work done by the Defense Electronics Supply Center (DESC) in the late 1970s. This new program added various requirements to existing documents and renegotiated with manufacturers for compliance. Today, the SMD program is used throughout the defense industry and provides a uniform and cost-effective way to procure high-reliability devices.

As the automotive industry began using more ICs, quality and reliability became a greater issue. In the early 1980s, Chrysler, Ford, and Delco Electronics (a division of General Motors) formed the Automotive Electronics Council (AEC). The AEC developed several standards, most notably AEC-Q100, which specified stress testing qualification for ICs. The Q100 specification had similarities to its military cousins, but was created with the economics of the automobile industry in mind. Today, Q100-qualified components are available from most IC manufacturers and are an excellent alternative to standard industrial devices for improved reliability with reasonable cost.

Enhanced (Plastic) Products

Early in the 21st century, the military and defense industry began trying to lower the cost of electronic systems. They approached manufacturers about how to provide high-quality and reliable components that could still operate in harsh environments, but at a lower cost.

The table compares various semiconductor device qualifications.

Texas Instruments, along with other suppliers, responded with families of high-reliability plastic devices called enhanced product (EP) or enhanced plastic (see figure). For instance, the TPS7A4001-EP is the enhanced version of the commercial TPS7A4001 high-input-voltage low-dropout (LDO) regulator. The EP version has similar specifications to the commercial device, but provides an extended military temperature range (−55°C to +125°C), gold-bond wires, and no pure tin to mitigate whisker growth.

The EP product family also adds traceability and several other reliability enhancements, such as improved die attach, to mitigate delamination. Devices follow a similar flow to Q100, but use a controlled baseline that ensures more uniform performance between wafer lots. Other suppliers provide EP products that follow similar flows, with the same concept of providing high reliability economically.

Another major issue that the defense industry faces is obsolescence. Because of how the government orders systems, a product may stay in production for more than 20 years. Many consumer products have production life spans of six months to a few years (like cellphones). If a component is no longer available following a production run of several million units, it’s less of an issue. However, a military system can be extremely complex, and redesigning because of obsolescence can be costly.

Since EP products follow a single flow, even in situations where the commercial device is no longer available, EP devices have a much longer life span. So while the performance of EP devices is also enhanced, so is the availability.

The push for EP products is not only driven by the military, but the industrial markets as well. Many industrial systems must work in harsh environments and have reliability measured in tens of years. Applications such as power plants, water-treatment facilities, manufacturing sites, and many others rely on computerized controls that operate continuously and often require system shutdowns to replace. These systems can also benefit from EPs that add additional reliability over time—especially where long service life is required.

The Future of High Reliability

With space becoming more accessible, especially for low earth orbit (LEO), the cost of satellites (specifically smaller payloads such as cube-satellites) is a prime concern. Both the defense industry and commercial space systems providers are looking for an EP-like product for short-mission-life applications (such as LEO). So imagine a family similar to EP but with a known radiation tolerance—enter space EP (SEP).

As shown in the figure, SEP lies between the QMLQ (Qualified Manufacturer List Class Q: military/hermetic) and QMLV (Class V: space) quality levels, but uses plastic packaging. It’s intended to be radiation-tested and assured, but removes expensive burn-in and wafer-lot life tests to control cost. With many LEO applications lasting three years or less, SEP is an economical compromise compared to costlier, fully space-qualified QMLV devices, which are intended for longer missions or higher radiation levels. SEP fills the gap for expendable, lower-cost satellites and nonstrategic radiation applications, such as high-altitude unmanned aerial vehicles (UAVs).

Conclusion

Without reliable electronics, our world would look much different. Since the invention of the IC, the semiconductor industry has made great strides in improving the reliability of even the most commonplace devices. As we move farther into the 21st century, the requirements for additional levels of performance and reliability will continue to expand supplier portfolios with enhanced products that are both economical and highly reliable.

Space will drive the next level of reliability, adding radiation and high-altitude requirements. Semiconductor processes continue to shrink toward the ultra-small while mitigating failure mechanisms at the atomic scale. It’s a never-ending battle between cost and performance; component families such as EP and SEP will fill the required reliability-versus-cost gaps.
