Channel: Electronic Design - Analog

IC Tackles Interface Idiosyncrasies of Biological-/Chemical-Sensing Probes


Targeted at electrode arrays used in wet-chemistry potentiostat configurations, the ADuCM355 provides current/voltage management and sensing plus advanced DSP and analytics.

Providing the interface circuitry for the sensors of physical parameters is a major role undertaken by many analog-circuit designers. However, even experienced engineers doing sensors for basics such as temperature, pressure, or position may be unfamiliar with the highly specialized probe arrangements that are standard in electrochemical sensing. These require potentiostat and electrochemical impedance spectroscopy (EIS) functionality to properly assess characteristics of solutions widely used in “wet chemistry” analysis.

To simplify the interface design and provide high-accuracy results, Analog Devices has introduced the ADuCM355 precision analog front end plus microcontroller for applications such as industrial gas sensing, instrumentation, vital-signs monitoring, and even disease management (Fig. 1).

1. The ADuCM355 for potentiostat-based chemical-solution assessment provides not only the sophisticated, precision analog front end, but also an advanced microcontroller for function management, security, and I/O—all at extremely low power.

The company claims that this highly integrated, single-chip device also has the most advanced sensor diagnostics, best-in-class low-noise and low-power performance, and the smallest form factor of available approaches, which presently require multiple devices for comparable performance. Furthermore, it’s the only solution available that supports basic dual-sensor potentiostat designs as well as three- and four-sensor electrode implementations. The IC goes beyond being an analog interface, though—it includes a microcontroller based on the Arm Cortex-M3 processor, specially designed to control and measure chemical sensors and biosensors.

Sensor Specifics and the PGSTAT

In potentiostatic mode, a potentiostat/galvanostat (PGSTAT) controls the potential of the counter electrode (CE) (sometimes referred to as the auxiliary electrode, AE) versus the working electrode (WE), to ensure that the potential difference (voltage) between the WE and a reference electrode (RE) equals a user-specified value (Fig. 2).

2. This simplified, high-level block diagram of a potentiostat shows the relationship among the three electrodes and the circuitry needed for their analog input and output. (Source: Yihan Zhang, Columbia University)

In galvanostatic mode, the current flow between the WE and the CE is controlled. In a complete design, the voltage between the RE and WE and the current flow between the CE and WE are continuously monitored and controlled by a closed-loop, negative-feedback mechanism.

Just two electrodes are used in a basic measurement configuration, with the “chemistry of interest” at the WE; the CE functions as the other half of the cell, completing the circuit and maintaining a constant interface potential regardless of current value. However, it’s difficult to simultaneously maintain a constant CE potential as current is flowing, and compensate for voltage drop across the chemical solution itself.

To overcome those problems, the three-electrode approach of WE, CE, and RE is used (in some special configurations, a fourth electrode is added). The entire arrangement is somewhat analogous to using 3- or 4-wire Kelvin sensing to measure voltage across a load resistor or at contact points while eliminating the effects of contact resistance and IR drop in leads.

Among its features, the ADuCM355 offers:

  • Voltage, current, and impedance measurement
  • Dual ultra-low-power, low-noise potentiostats: 8.5 µA, 1.6 µV RMS
  • Flexible 16-bit, 400-ksample/s measurement channel with advanced sensor diagnostics
  • Integrated analog hardware accelerators
  • 26-MHz core, 128-kB flash, 64-kB SRAM
  • Security/safety via hardware crypto accelerator with AES-128/AES-256; hardware CRC with programmable polynomial generator; and read/write protection of user flash
  • Two precision voltage references
  • UART, I2C, and SPI serial input/output; up to 10 GPIO pins; external interrupt option; and general-purpose, wakeup, and watchdog timers

The IC is housed in a 6- × 5-mm, 72-lead LGA package and operates from a single 2.8- to 3.6-V supply. In hibernate mode, but with bias to external sensors, it requires just 8.5 µA while current drain in full shutdown mode is 2 µA. It’s fully specified for −40 to +85°C ambient operation and priced at $5.90 each in 1000-piece lots. The EVAL-ADUCM355QSPZ kit (Fig. 3) uses a PC plus USB connection so that users can evaluate the performance of the ADuCM355 across a range of different electrochemical techniques.

3. The PC-based, USB-linked EVAL-ADUCM355QSPZ kit lets users evaluate the ADuCM355 in various electrochemical techniques, including chronoamperometry, voltammetry, and electrochemical impedance spectroscopy.



Reducing EMI Radiated Emissions with TI Smart Gate Drive


Smart Gate Drive technology in TI motor drivers helps customers solve their radiated EMI issues without costly board revisions or extra test time.

Introduction

Radiated emissions testing for electromagnetic interference (EMI) can reveal issues that send engineers back to the drawing board to revise their product. Design revisions and additional test time increase product costs and delay schedules while engineers debug and solve EMI issues.

Smart Gate Drive technology in TI motor drivers helps customers solve their radiated EMI issues without costly board revisions or extra test time. With selectable IDRIVE currents for driving the external FETs, emissions from the motor-driver section of a system can be minimized by a simple serial-interface (SPI) command or a resistor change.


Reduce Motor Drive BOM and PCB Area with TI Smart Gate Drive


TI's Smart Gate Drive features eliminate the need for external components while helping designers meet their EMI, robustness, and performance goals.

Introduction

In an ideal world, gate drivers would connect directly to MOSFET gates and the motor drive system would operate perfectly. However, the real world creates a variety of issues for motor drive designers that cause them to add extra external components between a gate driver's outputs and the external power-stage FETs. The example in Figure 1 shows that each external power MOSFET may need up to four additional components for a designer to mitigate possible FET gate-drive issues. For a three-phase driver, a designer might use up to twenty-four external components between the gate-driver IC and the triple half-bridge FETs.

The main reasons designers add these components are to improve radiated electromagnetic-interference (EMI) performance, protect the gate driver and FETs, and eliminate unintentional FET turn-on from switching transients. Adding these components to the gate driver increases board area and bill-of-materials (BOM) cost in motor drive designs. TI's Smart Gate Drive features eliminate the need for these external components while helping designers meet their EMI, robustness, and performance goals.


Motor Drive Protection with TI Smart Gate Drive


TI's Smart Gate Drive technology provides protection against MOSFET failures through the TDRIVE state machine, making the system more robust and efficient while driving the external power MOSFETs.

Smart Gate Drive Video


As the most intelligent and highly integrated gate drive technology, TI's Smart Gate Drive is easy to use, offers system cost savings, and provides reliable protection. Learn more about these benefits and find the right smart gate driver for your next design.

Hybrid Cascodes Simplify SiC Adoption in Popular Power Circuits


Hybrid-cascode power transistors offer a convenient upgrade path from standard power MOSFETs and IGBTs when trying to leverage the performance advantages of wide-bandgap semiconductors.

Wide-bandgap (WBG) power semiconductors—and silicon-carbide (SiC) devices in particular—can help significantly improve the energy efficiency and reliability of various types of power converters. These include inverters for use cases such as electric vehicles, renewable-energy micro-generators, and data-center power supplies.

Other widely used circuits, such as choppers, half-bridges, totem-pole PFC stages, and soft-switched dc-dc converters, can also benefit from the low on-resistance, high breakdown voltage, and high thermal conductivity inherent in SiC power transistors. However, their performance is otherwise restricted by the relatively poor body-diode behavior of standard SiC MOSFETs. A solution is at hand, though.

SiC JFETs are extremely robust transistors. Examples include the depletion-mode trench JFETs developed by UnitedSiC, which have no built-in body diode. Combining a SiC JFET with a low-voltage silicon MOSFET as a cascode pair creates a normally-off device that contains the intrinsic diode of a standard MOSFET and operates from standard gate drive voltages. It also benefits from the high efficiency of SiC and the robustness to withstand short-circuit and avalanche conditions.

The cascode switch’s body-diode behavior, in terms of forward voltage drop and reverse-recovery charge, shows that QRR changes very little even up to 200°C. The resulting device can help improve the efficiency and power density of power-conversion applications and serve as a drop-in enhancement over existing silicon (Si) devices.

Figure 1 shows how the SiC JFET and silicon MOSFET are combined in a cascode configuration. The MOSFET is normally off. The JFET is normally on and requires negative voltage to turn off. When the MOSFET is on, the JFET gate is shorted to source, ensuring the JFET also stays on. When the MOSFET turns off, its drain voltage rises until the JFET gate-to-source voltage reaches about −7 V and turns the JFET off. The MOSFET drain then sits at about 10 to 15 V.

1. The cascode configuration of a SiC JFET and standard MOSFET.

The MOSFET is a low-voltage type and has a fast body diode with very low reverse-recovery charge and low voltage drop that has performance akin to a SiC Schottky diode. In fact, the reverse recovery of the JFET cascode surpasses that of a SiC MOSFET over temperature. The MOSFET is typically co-packaged with the SiC die.

Comparing Device Behavior

Hybrid SiC cascodes can be used with standard gate drivers, with no penalty in performance, and benefit from extremely low RDS(ON) per die area. In turn, the smaller die size allows for lower input and output capacitances (CISS, COSS), resulting in lower switching loss (EOSS). The combination of a low RDS(ON) and low EOSS results in a low RDS(ON) × EOSS figure of merit (lower is better). Figure 2 compares key parameters of SiC cascodes and conventional silicon MOSFETs.

2. Properties of SiC cascodes compared with conventional SiC MOSFETs.

Further advantages of WBG cascodes include a natural clamping effect, resulting in robust performance under avalanche conditions. Momentary short-circuits of 4 µs or more are also handled well by devices with high saturation currents, aided by a positive temperature coefficient of on-resistance.

Unlike MOSFETs and IGBTs, the cascode's saturation current doesn’t depend on the gate-drive voltage and remains nearly constant after full enhancement at a gate-source voltage of about 8 V. Moreover, because cascode gate charge is significantly less than in IGBTs, gate-drive power requirements can be reduced considerably if the gate-voltage swing is lowered.
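The gate-power point can be made concrete with the standard estimate P = Qg × ΔVGS × fsw. The gate charges, swings, and switching frequency below are illustrative assumptions, not datasheet figures:

```python
def gate_drive_power(q_g, v_swing, f_sw):
    """Average gate-drive power: P = Qg * dVgs * fsw (Qg taken at that swing)."""
    return q_g * v_swing * f_sw

# Assumed numbers at a 100-kHz switching frequency:
igbt_p = gate_drive_power(200e-9, 24.0, 100e3)    # IGBT: 200 nC, +15/-9-V swing -> 0.48 W
cascode_p = gate_drive_power(40e-9, 12.0, 100e3)  # cascode: 40 nC, 0/12-V swing -> 0.048 W
```

With these assumed values, the lower gate charge and narrower swing cut gate-drive power by an order of magnitude.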

Cascodes also enable a wide gate-drive voltage swing of up to ±25 V, and thus are directly compatible with systems originally designed for Si or SiC MOSFETs. Even IGBT gate-drive swings of typically +15/−9 V are acceptable. Hence, they can be used as drop-in replacements, either to upgrade existing devices in pursuit of better performance, or to replace obsolete components. A battery-charger manufacturer was able to improve efficiency by 1.5% and increase power throughput by 30% at the 10-kW level, simply by replacing IGBTs with SiC cascodes.

Cascodes are available in standard TO-247 power packages, which allows them to directly replace IGBTs or Si/SiC MOSFETs. On the other hand, minor changes to the gate-drive circuit can further optimize the solution.

Figure 3 shows a typical circuit with separate values for R(ON) and R(OFF), which gives effective control of dV/dt and di/dt levels. The ferrite bead damps oscillations as necessary depending on layout. In addition, because the cascode configuration virtually eliminates the Miller capacitance, a negative gate-drive voltage isn’t needed to prevent dV/dt effects at the drain that cause injection of current into the gate, which can result in spurious turn-on.

3. Shown is a SiC cascode gate-drive circuit.

Board designers should always pay careful attention to the layout around the gate, as with any switch type. They should follow good practices to minimize inductance in the source connection so as to prevent voltage transients due to channel di/dt coupling into the gate.

The comparison table in Fig. 2 also shows that the intrinsic diode-recovery charge of SiC cascodes compares favorably and, when combined with its low forward drop VF, gives minimum energy loss in circuits where the diode conducts.

Figure 4 compares the switching waveforms of a cascode and a SiC MOSFET, operating at 150°C, with and without an external diode, driving an 800-V inductive load. Using the double-pulse method, the cascode diode shows shorter recovery time (tRR) and lower peak recovery current (IRM).

4. Comparison of intrinsic diode reverse-recovery characteristics. The UJC1210K has the lowest QRR.

SiC Cascodes in Practice

The near-ideal parameters, combined with small die size, of SiC cascodes make them a strong choice for new designs in important applications such as ac-dc converters, inverters, dc-dc converters, welders, class-D audio amplifiers, EV/HEV traction motors, and others—in addition to replacing IGBTs or standard silicon MOSFETs in legacy systems. In “clean-sheet” projects, designers also have extra freedom to take advantage of the high-frequency capability of these devices to specify smaller-size magnetics and passive components.

Major benefits are seen particularly in bridgeless totem-pole power-factor-correction (PFC) circuits (Fig. 5). Conventional silicon devices with slow body diodes have restricted the performance of such circuits in the past—they forced the use of the variable-frequency critical conduction mode that sets switching current to zero at the end of each conduction period. This mode produces high peak currents with consequent stress, necessitating oversized components. Adding reverse-blocking and parallel-conducting diodes helps, but greatly increases component count.

5. Here’s a bridgeless totem-pole PFC circuit.

Cascode SiC JFETs allow the circuit to be operated in continuous conduction mode with lower peak currents. This brings about higher efficiency, smaller inductor size, and simplified filtering with lower EMI problems owing to the fixed operating frequency. To illustrate, one circuit that used UnitedSiC UJC06505K devices at 1.5 kW and 230 V ac-line achieved efficiency as high as 99.4%.

While the efficiency of converter primary switches can be increased, SiC cascodes can also improve rectification for high-voltage dc outputs when configured for synchronous rectification. Figure 6 shows an example circuit. In so-called third-quadrant operation, current flows from source to the drain of one or the other of the cascodes through the output inductor to the load. Current flow through the body diode sets the JFET gate-source voltage to approximately +0.7 V, turning it fully on automatically. If the cascode gate is set high, the 0.7-V body-diode drop from the silicon MOSFET is bypassed, leaving only the resistance of the JFET, the same as with forward conduction.
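The conduction-loss benefit of bypassing the 0.7-V body-diode drop is easy to quantify. The 10-A load current and 50-mΩ on-resistance below are hypothetical values chosen only for illustration:

```python
def diode_loss(i_load, v_f=0.7):
    """Conduction loss with current through the ~0.7-V body diode: P = I * Vf."""
    return i_load * v_f

def channel_loss(i_load, r_ds_on):
    """Conduction loss once the cascode gate is driven high: P = I^2 * R."""
    return i_load ** 2 * r_ds_on

# Assumed 10-A load, 50-mOhm on-resistance:
p_diode = diode_loss(10.0)           # 7.0 W through the body diode
p_channel = channel_loss(10.0, 0.05) # 5.0 W with synchronous rectification
```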

6. Synchronous rectification using SiC cascodes.

Conclusion

Wide-bandgap power transistors offer several advantages over conventional silicon MOSFETs and IGBTs, including smaller die size, lower on-resistance with high breakdown voltage ratings, and switching characteristics almost independent of junction temperature.

Designers can adopt these devices as drop-in replacements in existing products, or in new products being designed from the ground up. The SiC cascode is a novel class of device that allows designers to gain many of the advantages of WBG technology using existing gate-drive circuits, or with minimal changes to fine-tune performance.


Precision Op Amps Yield High-Accuracy Circuits


Sponsored by Texas Instruments: Special op amps are available to deal with drift, high- and low-input voltages, noise, and error sources in critical applications such as medical instrumentation.


The ordinary modern IC op amp is fully applicable to most standard amplifier configurations. Error sources are small and may be of no concern, or one can easily compensate for them. Yet there are some types of circuits where inherent but small op-amp error sources can’t be tolerated. As a result, precision IC op amps have been created to address these needs. Three examples here illustrate some common problems and solutions.


Zero-Drift Amplifiers

A zero-drift amplifier is one whose output doesn’t change significantly as a result of the amplification of unwanted physical characteristics such as input offset voltage. Op amps are high-gain dc differential amplifiers that commonly have mismatched input components and device limitations that introduce error signals.

The most common and detrimental error is input offset voltage. This characteristic is caused by small mismatches in the input differential transistors or any related resistors. This error is usually very small, and in many applications, it can be ignored as it doesn’t cause any detrimental effects. However, in applications where very small input signals are to be amplified with very high gain, this unwanted error voltage is amplified along with the desired input. The output, therefore, isn’t representative of the true input.

In addition, the input offset voltage error varies with temperature, introducing further obfuscations of the true signal. This drift is usually stated in microvolts per degree Celsius (µV/°C).

This problem has been known for decades. In applications that can’t tolerate such errors and drift, compensation methods have been devised. Most older and a few current IC op amps have a separate null or trim pin input to which a corrective voltage can be applied. Another solution is to add a resistor in the non-inverting input, which will correct for bias current variations. These methods don’t correct the drift, though. Precision op amps are available to solve these problems.

Zero-drift amplifiers feature an input offset voltage of less than a microvolt. Drift is reduced to fractions of a nanovolt (nV) variation per °C. Some of their key features include CMOS semiconductor technology, rail-to-rail input and output, and the use of chopping techniques.

The term rail-to-rail refers to the ability of an op amp to achieve an output or input that can vary over the full range of the dc supply voltages. This feature is realized with CMOS circuits. For instance, if an op amp uses ±5-V supplies, the output can swing over the full range from −5 V to +5 V, a total of 10 V. For a single-supply op amp, the range would be 0 to +5 V. The actual output limits would be within about 10 mV of the supply limits. This definition also applies to the input-voltage range.

1. This complementary pair input arrangement permits CMOS op amps to handle rail-to-rail input signals.

To achieve the rail-to-rail feature, the zero-drift amplifiers use the input arrangement shown in Figure 1. The differential input signal is applied simultaneously to both the PMOS and NMOS pairs. The NMOS devices deal with input signals in the (VDD − 1.8 V) to VDD range, while the PMOS pair handles the voltages from VSS to (VDD − 1.8 V). As the input signals transition from one range to another, they introduce a form of crossover distortion. This distortion is removed by internal correction circuitry (discussed later).

Another way to achieve the rail-to-rail input range is to use a single input differential pair with an internal charge pump. The charge pump increases the input amplifier voltage by about 1.8 V above VDD. This eliminates the crossover distortion.

The low offset and zero-drift characteristic is sometimes achieved by laser or other trimming of the input components during manufacturing. Most newer precision amplifiers use an internal correction technique known as chopping to realize the zero-drift characteristic. Also known as auto-zero amplifiers, these devices employ internal switching circuits that “sample” the offset and cancel it out.

Figure 2 shows one of several possible configurations. The input is applied simultaneously to a conventional amplifier and a nulling amplifier (upper path in the figure). The nulling amplifier looks at the input offset of the standard amplifier. In the first phase of the switching cycle, the offset is amplified and stored on capacitor CC. In the second phase of the switching cycle, the signal’s polarity is reversed so that the earlier charge is effectively cancelled.

2. The generalized precision chopper amplifier produces near-zero offset and virtually no drift or 1/f noise.

The continuous charging and discharging produces an average output across CC of zero. The switching occurs at a frequency usually in the 1- to 50-kHz range. This creates some transients—these are removed with a synchronous notch filter before arriving at the corrected output. 

The overall result is an offset voltage of typically less than a microvolt and a drift specification of a fraction of a µV/°C. The whole process also eliminates the crossover distortion and the usual ever-present 1/f noise that occurs in op amps at low frequencies.
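The chopping principle described above can be sketched numerically: modulating the input with a ±1 sequence before the gain stage leaves the amplifier's offset at dc, so demodulating and averaging at the output recovers the amplified signal while the offset term averages to zero. The signal, offset, and gain values are arbitrary illustrations:

```python
import numpy as np

n = 10_000     # even number of chop half-cycles
v_in = 1e-3    # 1-mV dc input signal
v_os = 5e-3    # 5-mV amplifier input offset (five times the signal!)
gain = 100.0

chop = np.where(np.arange(n) % 2 == 0, 1.0, -1.0)  # +/-1 chopping sequence
amp_out = gain * (v_in * chop + v_os)              # offset adds inside the amplifier
v_out = np.mean(amp_out * chop)                    # demodulate, then average
# v_out recovers gain * v_in (0.1 V); the modulated offset averages to zero
```

This is only the averaging idea; a real chopper amplifier performs the cancellation continuously with switched capacitors and a synchronous filter, as the article describes.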

Zero-Drift Applications

Any application that involves the amplification of very low signal levels is a candidate for zero-drift amplifiers, because any significant input offset voltage will introduce errors. Some of the primary applications include bridge amplifiers using strain gauges or other sensors, current shunt measurement, thermocouples, IR sensors, electronic scales (load cells), and medical instrumentation. Other uses are ADC input buffer amplifiers and DAC output amplifiers.

Figure 3 shows how current is determined by measuring the voltage across a 0.1-Ω sense resistor (Rs). A current of 1 A will produce a voltage drop of 100 mV, but a current of 1 mA will only produce a voltage of 100 µV. Significant amplification is needed to achieve a usable output measurement. Any major input offset voltage will introduce a huge error.

3. A precision zero-drift op amp provides an accurate amplification of voltage and the determination of current in a sense resistor.
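The sensitivity of the shunt measurement to input offset can be checked with a few lines. The 100-µV offset used below is a hypothetical worst case, not a figure from the article:

```python
def measured_current(i_true, r_sense=0.1, gain=100.0, v_os=0.0):
    """Current inferred from the amplified sense voltage, including offset."""
    v_out = gain * (i_true * r_sense + v_os)
    return v_out / (gain * r_sense)

# At 1 A, a 100-uV offset adds only 0.1% error (reads 1.001 A)...
high = measured_current(1.0, v_os=100e-6)
# ...but at 1 mA the same offset doubles the reading (reads 2 mA).
low = measured_current(1e-3, v_os=100e-6)
```

The example makes the article's point concrete: the smaller the sense voltage, the more a fixed offset dominates, which is why a zero-drift device is needed at low currents.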

Figure 4 shows an instrumentation amplifier (IA) made up of three zero-drift precision op amps used as a bridge amplifier with a strain gauge. Tiny variations in strain-gauge resistance with applied force translate into very small voltages (µV) that must be amplified to be properly measured. Rg sets the IA gain, but additional gain may be obtained with another precision amp in the signal path if needed. Zero-drift IAs ensure that op-amp errors are minimal and the outputs are accurate.

4. Bridge measurement circuits with selected resistive sensors produce tiny voltage output variations that require significant amplification to produce a useful output. A precision zero-drift op amp serves that purpose.
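The magnitudes involved can be sketched for a hypothetical quarter bridge (one active gauge) driven into the classic three-op-amp IA, whose input-stage gain is 1 + 2·Rfb/Rg. All component values here are illustrative assumptions:

```python
def bridge_output(v_ex, r_nom, delta_r):
    """Differential output of a quarter bridge (one active strain gauge)."""
    v_plus = v_ex * (r_nom + delta_r) / (2 * r_nom + delta_r)
    return v_plus - v_ex / 2

def ia_gain(r_g, r_fb):
    """Gain of the classic three-op-amp IA input stage: 1 + 2*Rfb/Rg."""
    return 1 + 2 * r_fb / r_g

# A 0.1% gauge change on a 350-ohm bridge at 5-V excitation gives ~1.25 mV,
v_bridge = bridge_output(5.0, 350.0, 0.35)
# which an IA gain of 999 (Rg = 100 ohms, Rfb = 49.9 kohms) scales to ~1.25 V.
g = ia_gain(100.0, 49_900.0)
```

At these signal levels, even a few microvolts of uncorrected offset would appear as millivolts at the output, which is the motivation for zero-drift devices in bridge circuits.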

Attenuator Amp Design Maximizes Input Voltage of Differential ADCs

While most op-amp designs feature gain, there are some special applications where an attenuation factor is required. An example is in industrial applications where the output of a sensor or other input is greater than the maximum allowed by an ADC. An attenuator amplifier can meet that requirement. Since most ADC inputs are differential, a fully differential amplifier (FDA) with both differential inputs and outputs is needed.

Figure 5 shows a typical arrangement. The fully differential amplifier outputs drive the ADC. A follower op amp provides a high input impedance for the input source. The input is single-ended.

5. This precision op amp is used as an attenuator and buffer for an ADC.

The gain (or attenuation factor in this case) is a function of the differential amplifier resistors.

Gain (attenuation) = Vodiff/Vi = Rf/Rg

Making Rf smaller than Rg provides attenuation with amplifier buffering to the ADC. The maximum input to the ADC is usually the reference value, for example 5 V.
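With the gain expression above, the resistor ratio falls directly out of the full-scale figures. The 20-V source and 5-V ADC reference below are assumed example values:

```python
def attenuation(r_f, r_g):
    """FDA gain |Vodiff/Vi| = Rf/Rg; values below 1 attenuate."""
    return r_f / r_g

def max_input(v_ref, r_f, r_g):
    """Largest input that keeps the FDA output at or below the ADC reference."""
    return v_ref / attenuation(r_f, r_g)

# Rf = 1 k, Rg = 4 k gives a 0.25 attenuation factor,
# so a 20-V source maps onto a 5-V ADC full-scale range.
att = attenuation(1_000.0, 4_000.0)
v_max = max_input(5.0, 1_000.0, 4_000.0)
```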

MUX-Friendly Precision Op Amps

Another application that often requires a precision op-amp solution is a multichannel data-acquisition system using multiplexers (MUX). The typical data-acquisition system employs a single ADC that serves multiple inputs. A multiplexer selects the desired input for conversion. One common arrangement uses a precision op amp at the output of the MUX to drive the ADC.

Figure 6 shows the arrangement. The protection diodes are usually necessary to prevent damage caused by differential voltages higher than the amplifier’s maximum rating. This could occur when switching from one channel to another with extreme voltages and/or polarity differences.

6. Here, a precision op amp buffers the output of a multiplexer to an ADC in a data-acquisition system.

Since multiplexers switch very fast (nanoseconds) when changing input channels, the buffering amplifier must be able to keep up. Critical specifications include a fast slew rate and a fast settling time. Both JFET and CMOS precision op amps can be used. Some of the newer devices also have the protection diodes or their equivalent integrated into the IC.



Selecting Film or Electrolytic Capacitors for Power-Conversion Circuits


Capacitors can provide vital ride-through (or hold-up) energy or mitigate ripple and noise in power-conversion circuits. Choosing the right type can profoundly affect a system’s overall size, cost, and performance.

With their low equivalent series resistance (ESR), which allows for good ripple-current handling as well as high surge-voltage ratings and self-healing properties, film capacitors are strong candidates for many power-conditioning duties in key applications like electric vehicles, renewable energy, and industrial drives. They’re especially suited to scenarios where no hold-up (or ride-through) is required, such as in the event of an outage or between line-frequency ripple peaks, and where there’s a need to source or sink large high-frequency ripple currents with high reliability and low losses.

Film capacitors are also an excellent fit in applications that operate high dc bus voltages to minimize ohmic losses. Because aluminum electrolytic capacitors are only available with ratings up to about 550 V, applications operating at higher voltages require multiple devices to be connected in series. It then becomes necessary to prevent voltage imbalance, either by selecting capacitors with matched values, which is expensive and time-consuming, or adding voltage-balancing resistors that impose additional energy losses and BOM cost.

On the other hand, aluminum electrolytics remain a strong choice when sheer energy-storage density (joules/cm³) is the prime concern. One example is in commodity offline power supplies, where cost-effective bulk energy storage is needed to maintain the dc output voltage in the event of a power outage, without battery backup. Suitable derating can mitigate the lifetime and reliability issues often associated with aluminum electrolytics.

However, it’s true that aluminum-electrolytic capacitors can only tolerate overvoltages of about 20% before damage occurs, whereas film capacitors can withstand exposure to voltages up to about double their rating for short periods. Self-healing ensures safer response to occasional stresses, as typically encountered in real-world applications.

In addition, film capacitors can provide easier connection and mounting options, and are non-polarized and hence immune to reverse-connection errors. They’re often packaged in insulated, volumetrically efficient rectangular “box” enclosures. Various electrical connection types are available, such as screw terminals, lugs, “fastons,” or bus bars.

Table 1 compares properties of film-capacitor types in common use. Polyester types are utilized at low voltages, while polypropylene typically exhibits the lowest losses and highest reliability under stress thanks to its low dissipation factor (DF) and high dielectric breakdown per unit thickness. The DF is also relatively stable with temperature and frequency. Segmented high-crystalline metallized polypropylene is also available, and offers energy density comparable to that of aluminum electrolytics.

Table 1. Characteristics of common film-capacitor types. (Source: Wikipedia: Film Capacitor)

Choosing the Right Capacitor

Analyzing some common power-conversion circuits can show how capacitor technology selection profoundly influences size, weight, and cost, depending on whether capacitance is needed for energy storage or to handle ripple or noise.

For example, comparing electrolytic and film capacitors when used as bulk capacitance for a 1-kW offline converter starkly illustrates the differences between properties of the two types. The converter, as shown in Figure 1, features a power-factor-corrected front end and has nominal dc-bus voltage (Vn) of 400 V.

1. Capacitance as energy storage to ride through a power outage.

Let’s assume efficiency is 90% and dropout voltage (Vd) 300 V, below which output regulation is lost. If an outage occurs, the bulk capacitor C1 supplies energy to maintain constant output power as the bus voltage drops from 400 V toward 300 V. We can calculate the value of C1 needed to give 20-ms ride-through before the voltage falls below 300 V:
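As a sketch of that calculation (using the figures above; the function name is just illustrative), the energy the capacitor gives up as the bus falls from Vn to Vd must cover the output power over the hold-up time, divided by efficiency:

```python
# Ride-through bulk capacitance from an energy balance (illustrative sketch):
# 0.5 * C * (Vn^2 - Vd^2) >= Pout * t_hold / eta

def ride_through_capacitance(p_out, t_hold, v_nom, v_drop, eta):
    """Minimum bulk capacitance (farads) for t_hold seconds of ride-through."""
    return 2.0 * p_out * t_hold / (eta * (v_nom**2 - v_drop**2))

c1 = ride_through_capacitance(p_out=1000.0, t_hold=20e-3,
                              v_nom=400.0, v_drop=300.0, eta=0.9)
print(f"C1 >= {c1 * 1e6:.0f} uF")   # roughly 635 uF, so a 680-uF part fits
```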

A 680-µF, 450-V aluminum-electrolytic capacitor from the TDK-EPCOS B43508 series, in a 35-mm-diameter × 55-mm case size, meets the requirement with overall volume of 53 cm3 (about three cubic inches). In contrast, a solution using film capacitors would be impractically large: Up to 15 TDK-EPCOS B32678 film capacitors may need to be connected in parallel, resulting in a total volume of 1500 cm3 (91 cubic inches).

The choice would change dramatically if the capacitor were needed only to control ripple voltage on a dc line, such as in an EV powertrain. The bus voltage could be 400 V, as before, but supplied by a battery, so there’s no ride-through requirement. It would be realistic to seek to limit the ripple within say 4 V rms, while a downstream converter draws 80-A rms pulse-current at a switching frequency of 20 kHz. The capacitance required is:
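A first-order way to arrive at that value is the sketch below, which treats the capacitor’s reactance as the only impedance at the switching frequency (ignoring ESR and ESL):

```python
import math

# Capacitance to hold ripple within a target:
# V_ripple ~= I_ripple * Xc = I / (2*pi*f*C), so C = I / (2*pi*f*V)
def ripple_capacitance(i_ripple_rms, f_sw, v_ripple_rms):
    return i_ripple_rms / (2.0 * math.pi * f_sw * v_ripple_rms)

c = ripple_capacitance(i_ripple_rms=80.0, f_sw=20e3, v_ripple_rms=4.0)
print(f"C >= {c * 1e6:.0f} uF")  # about 159 uF, so a 180-uF part fits
```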

A 180-µF, 450-V electrolytic capacitor from the TDK-EPCOS B43508 series has a ripple-current rating of about 3.5 A rms at 60°C, including frequency correction. To handle 80 A would require 23 capacitors in parallel, giving an unnecessarily large capacitance of 4140 µF and a total volume of about 1200 cm3 (73 cubic inches). This concurs with the 20-mA/µF rule of thumb for electrolytic-capacitor ripple-current ratings.

Using film capacitors from the TDK-EPCOS B32678 series, just four devices in parallel give a 132-A rms ripple-current rating in a volume of 402 cm3 (24.5 cubic inches). Moreover, if the ambient temperature can be expected to remain below 70°C, capacitors in an even smaller case size can be chosen.

There are other reasons that make film capacitors a superior choice. The excessive capacitance of the parallel electrolytics could cause problems, such as managing the energy of the inrush current. In addition, film types are far more robust in the event of transient overvoltages on the dc-link connection, which are common in light traction applications such as electric vehicles.

Similar analysis would be valid for applications such as UPS systems, power conditioning in wind or solar generators, general grid-tied inverters, and welders.

Film as First Choice

The relative costs of film or electrolytic capacitors can be analyzed from a bulk-storage or ripple-capability standpoint. Figures published in 2013 compare typical costs for a dc bus powered by a rectified 440-V ac supply (Table 2).

Table 2. Cost comparison between film and electrolytic capacitors.

With this analysis in mind, film capacitors are an excellent choice for decoupling, switch snubbing, and filtering applications such as EMI suppression or inverter-output filtering.

A decoupling capacitor placed across the dc bus of an inverter or converter provides a low-inductance path for circulating high-frequency currents. A rule of thumb is to use about 1 µF per 100 A switched. Connections to the capacitor should be kept as short as possible to avoid inducing transient voltages: when current is large and frequency is high, slew rates of 1000 A/µs are possible, and because PCB traces can have inductance of about 1 nH/mm, each millimeter of trace can contribute a 1-V transient according to V = L × di/dt.

In a switch-snubbing circuit, the capacitor is placed in series with a resistor/diode combination and connected across the power switch—typically an IGBT or MOSFET—to control dV/dt (Fig. 2). The snubber slows ringing, controls EMI, and prevents spurious turn-on/turn-off. The snubber capacitance is typically chosen to be about twice the sum of the switch output capacitance and mounting capacitance. The resistance value is then chosen to critically damp any ringing.
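The sizing rules above can be sketched as follows. The component values are assumptions for illustration, and R = 2√(L/C) is the usual critical-damping choice against an assumed parasitic loop inductance L, not a value from the article:

```python
import math

# RC snubber sizing sketch following the rules of thumb in the text.
c_oss   = 500e-12   # switch output capacitance (assumed)
c_mount = 100e-12   # mounting/stray capacitance (assumed)
l_loop  = 50e-9     # parasitic loop inductance that rings (assumed)

c_snub = 2.0 * (c_oss + c_mount)            # ~2x (Coss + Cmount), per the text
r_snub = 2.0 * math.sqrt(l_loop / c_snub)   # R = 2*sqrt(L/C) critically damps

print(f"Csnub = {c_snub * 1e9:.2f} nF, Rsnub = {r_snub:.1f} ohms")
```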

2. Switch snubbing of IGBT or MOSFET.

EMI Suppression

Film capacitors are also ideal as X and Y capacitors to reduce differential-mode and common-mode noise, respectively (Fig. 3), leveraging their self-healing and transient-overvoltage capabilities. Safety-rated X1 (4-kV) or X2 (2.5-kV) capacitors are connected across the power lines, and are typically polypropylene types with capacitance values in the microfarad range, as needed to comply with applicable EMC standards.

3. X and Y capacitors for EMI suppression.

Y capacitors with low connection inductance are connected in line-to-earth positions, as shown in Fig. 3; Y1 and Y2 types are rated for 8-kV and 5-kV transients, respectively. Leakage-current considerations limit the amount of capacitance that can be applied. Although the low connection inductances of film capacitors help keep self-resonances high, external connections to the ground system should also be kept short.

Inverter Output Filtering

Non-polarized film capacitors combined with series inductors, often in a single module, create low-pass filters for attenuating high-frequency harmonics in the ac output of drives and inverters (Fig. 4). These are increasingly used for meeting system EMC requirements and reducing dV/dt-related stress on cabling and motors, particularly when the load is distant from the drive unit.

4. Film capacitors are used in motor-drive EMC filtering.

Conclusion

Knowing the relative strengths of electrolytic and film capacitors for power-conversion applications can help designers make the right choices for optimum overall size, weight, and bill-of-materials cost. They can be summarized as follows:

Electrolytic capacitors:

  • Higher stored energy density (joules/cm3)
  • Lower cost of bulk capacitance for “ride-through” of dc bus voltage
  • Maintain ripple current rating at higher temperatures

Film capacitors:

  • Lower ESR for superior ripple handling
  • Higher surge-voltage ratings
  • Self-healing boosts system reliability and lifetime

Rudy Ramos is Project Manager - Technical Content Marketing at Mouser Electronics.


The High-Performance RF MEMS Switch Has Arrived


Sponsored by Digi-Key and Analog Devices: After years of trials and tribulations from all corners of the industry to nail down the technology, two MEMS SP4T switches overcome those nagging RF hurdles.

Download this article in PDF format.

It’s been obvious for decades that the microelectromechanical systems (MEMS) switch could potentially replace PIN diode, mechanical, FET, and other types of switches in a broad swath of RF and microwave applications. A MEMS switch has almost everything an engineer could ask for: It’s smaller and lighter than any other switch technology; has very little insertion loss; provides very high isolation; can operate well into the millimeter-wave region; has exceptionally low intermodulation distortion; and can handle reasonable amounts of RF power.

So why didn’t RF MEMS switches take the world by storm years ago? The answer lies in the obstacles that have defied developers’ efforts to tame them, primarily suitability for low-cost mass production and demonstrated reliability over billions of switching cycles.

However, it now appears that 35 years after IBM described the first “microelectromechanical” switch, and more than a quarter-century after Analog Devices introduced the world’s first commercial MEMS product (an accelerometer), the technology has finally overcome these hurdles. To understand why it took so long to get here, it’s important to understand the topsy-turvy development of the MEMS switch.

MEMS Memory Lane

IBM’s development marked the first time that semiconductor fabrication techniques were used to make tiny mechanical structures in silicon that were moved electrically. It showed that such a device could combine the benefits of semiconductor manufacturing processes with the inherent characteristics of electromechanical relays. The rewards were potentially so enormous, especially for defense systems, that Hughes Research Labs, Raytheon, Rockwell, and others devoted large sums of money and time to bring RF MEMS switches to fruition.

Their work solidified their belief that not only could these devices be smaller than even the smallest PIN diode, FET, or other solid-state switch, they could be orders of magnitude smaller than electromechanical types that, although invented before the Civil War, reigned supreme (and are still in use today). The next step seemed obvious: Refine the manufacturing process and solve reliability issues, and a few other less-thorny problems. After masses of journal papers were written, patents applied for, and other efforts by dozens of organizations throughout the world, commercial devices remained as far away as ever.

By this time, in the early 2000s, MEMS marketing had reached a fever pitch, promising that the remaining problems associated with MEMS switches would soon be swept away. Startups were formed to develop and fabricate commercial products, and defense and commercial customers awaited the outcome. What they got instead were sampled devices that delivered neither the required reliability nor the wafer-to-wafer consistency that had been promised.

As Will Rogers famously said, “you never get a second chance to make a first impression,” and that appeared to be the fate of the MEMS switch. The unfulfilled promises made designers wary of using MEMS switches and sometimes MEMS technology in general. MEMS developers folded or were absorbed by larger companies, and the technology’s remaining champions doubled down to make sure that the next chance, if it came, would convince the skeptics.

Analog Devices was the first to stand and deliver. Other device prototypes were demonstrated by Radant MEMS (later absorbed by CPI) and TeraVicta Technologies (now defunct), leaving Analog Devices as one of the few remaining manufacturers of commercially available packaged MEMS switches, a position it still holds today.

Surviving and Thriving with MEMS

It’s an enviable position for the company, as the wireless industry is faced with the need to switch more frequency bands within the confines of a smartphone. Meanwhile, the defense industry has wholly adopted the active electronically steered array (AESA) radar architecture, in which a switch is required at each of its hundreds or thousands of elements.

Filter-bank switching in measurement systems is among the host of other applications that could benefit, as are IoT applications in which small size, high performance, and (ultimately) very low cost are key drivers.

1. The ADGM1004 SP4T MEMS switch sits on top of a “classic” DPDT electromechanical relay.

Having been involved with MEMS development for three decades, Analog Devices made sure that when it introduced its latest MEMS SP4T switches (Fig. 1), the ADGM1004 and ADGM1304, which operate from dc to 13 and 14 GHz, respectively, all likely questions would be answered.

For example, both devices are about 95% smaller, 10X more reliable, 30X faster, and consume 10X less power than a typical RF switch. They’re available in quantity, have demonstrated reliability over more than 1 billion switching cycles, and use advanced packaging techniques to ensure high resistance to electrostatic discharge, the latter being one of the major challenges over the years.

The switches are also characterized for hot-switching, an undesirable condition in which RF power is passed through the switch before it’s closed, typically reducing operating life by a factor of up to 10. In contrast, the ADGM1004 and ADGM1304 have been demonstrated to have an MTBF of at least 3.4 billion cycles at an operating frequency of 2 GHz with a 10-dBm RF signal.

2. Shown is the MEMS die and driver IC in the SMD QFN package.

The switches contain two die: the switch itself and the low-voltage, low-current IC that drives it (Fig. 2). The driver IC is compatible with CMOS logic drive voltages, and the pair are housed in a 5- × 4-mm lead-frame chip-scale package.

The switching mechanism is an electrostatically actuated gold cantilever beam that resembles a tiny relay, with metal-to-metal contacts actuated by a relatively high dc voltage. Four of the cantilever switch beams (for a SP4T configuration) are shown in Fig. 3.

3. Four MEMS cantilever switch beams are used in the SP4T switches.

The ADGM1004 is designed to deliver extremely high ESD resistance, using a dedicated circuit to produce a human-body-model (HBM) rating of 5 kV at the RF1 through RF4 and RFC pins, and 2.5 kV for all others—claimed to be the highest of any MEMS switch in the industry. The circuit has little or no effect on RF switching performance. Key specifications are shown in the table.

Unlike the many other MEMS functions that have been commercialized for years, RF switches present unique challenges. Achieving wide bandwidths, high reliability, and commercial producibility in a tiny RF MEMS switch has been an elusive goal since the technology was first realized. However, after countless efforts by organizations ranging from DARPA to defense contractors, academia, private research organizations, and other entities, the path is now open for MEMS to deliver performance unachievable by any other technology.

Related Reference:

Video: Fundamentals of Analog Devices’ RF MEMS Switch Technology


Regulator Review: Wring the Best Performance Out of Your LDO


Sponsored by Texas Instruments: Follow these practical steps to augment the efficiency of this fundamental building block in circuit design.

Download this article in PDF format.

A low-dropout (LDO) regulator is a simple, inexpensive way to generate a regulated output voltage from a higher-voltage input. It’s inherently low-noise because it has no switching transients, and it requires few if any external components, so it takes up little board space.

The LDO is easy to use right out of the box, but that doesn’t mean its performance can’t be improved. In this article, we’ll review the principles of operation of the standard regulator and the LDO, and discuss a couple of ways to boost its noise performance.

Transforming a Standard Regulator into an LDO

A linear regulator is inefficient because it dissipates power across the regulation device to regulate the output voltage. The regulation device is typically a power transistor—either a bipolar device or a FET.

1. The basic PMOS and NMOS linear regulator architectures. (Source: TI Training: LDO Basics—Dropout Voltage video)

Figure 1 shows MOSFET linear-regulator circuits with both PMOS and NMOS power transistors. In both circuits, the voltage across the regulator’s power transistor is:

VDS = VIN − VOUT = IOUT × RDS

where RDS is the FET’s drain-to-source resistance. VDS depends on the FET’s gate-to-source voltage (VGS).

The regulator control circuit uses R1 and R2 to divide down VOUT and compares the scaled value to a reference voltage (VREF). The resulting error signal drives the FET’s gate voltage (VG); this, in turn, controls VGS to regulate VOUT against changes in load current (IOUT) or input voltage (VIN). In these basic designs, VG can range between the error amplifier’s positive and negative supply rails: VIN and ground.
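At equilibrium, the loop forces the divided output to equal VREF, which gives the familiar relationship sketched below (the resistor and reference values are illustrative assumptions, not from a specific device):

```python
# The divider relationship the control loop enforces at equilibrium:
# VREF = VOUT * R2 / (R1 + R2), so VOUT = VREF * (1 + R1/R2).
def ldo_output_voltage(v_ref, r1, r2):
    return v_ref * (1.0 + r1 / r2)

print(ldo_output_voltage(v_ref=0.8, r1=30e3, r2=10e3))  # -> 3.2 (V)
```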

The minimum allowed value of VDS is called the dropout voltage (VDO). For proper operation, VIN ≥ VOUT + VDO. If VIN falls below this value, the regulator drops out of regulation and can no longer maintain the desired output. Instead, the output voltage tracks the input voltage minus the dropout voltage.

As its name implies, an LDO is designed to minimize the value of VDO. A state-of-the-art LDO can produce a regulated output with a dropout voltage as low as 100 mV. With the spread of low-voltage, power-efficient designs, LDOs are the dominant linear regulators today.

How do we turn a standard regulator design into an LDO? At the design stage, a larger-size power FET will yield a lower dropout voltage because it has a lower resistance. However, the architecture of the circuit is the main determining factor, especially the minimum and maximum values of VGS.

The P-channel MOSFET (PMOS) design requires a negative VGS to operate. If either VIN or IOUT increases, the control circuit responds by driving VGS more negative. In the PMOS case, VS = VIN. The maximum negative VGS occurs when driving VG to the negative rail; this corresponds to VGS = −VIN. Reducing the dropout voltage (VIN − VOUT) doesn’t affect the magnitude of VGS, making an LDO design possible.

For the N-channel MOSFET (NMOS), the situation is reversed. If VIN decreases, the control circuit must drive VG more positive to increase VGS. The error amplifier can still drive VG between VIN and ground, but now VS = VOUT, so the maximum value of VGS is (VIN − VOUT)—in other words, the voltage across the FET.

A FET doesn’t even begin to turn on until VGS is greater than a threshold voltage of about 2 V. The NMOS circuit as shown limits the minimum dropout voltage to 2 V, not exactly LDO territory.

2. Here, the basic NMOS circuit is modified to enable LDO operation. Either a charge pump or an external bias voltage can be used. (Source: TI Training: LDO Basics—Dropout Voltage with Jose Gonzalez video)

Adding an internal charge pump between VIN and the error amp’s positive supply solves the problem (Fig. 2). The charge pump boosts the error amplifier’s positive rail, allowing it to drive a higher VGS, and making an LDO design possible. An alternative implementation supplies the higher positive rail via an external BIAS pin. Ultra-low-dropout LDOs such as TI’s new TPS7A10 use this approach.

As we’ve discussed, low noise is an advantage of a linear regulator or LDO, and many applications use an LDO to clean up the noisy output of a switching power converter. Two parameters in the datasheet characterize the LDO’s noise performance, so we’ll discuss these now.

Input Noise and PSRR

The power-supply rejection ratio (PSRR), also called the power-supply ripple rejection, describes how well the LDO rejects noise from an external source, such as a switching power supply, that appears on its input. The PSRR compares the output ripple and the input ripple of the LDO over the frequency range of interest for the application. Figure 3 shows the graph of PSRR vs. current for the TPS717, a low-noise, high-bandwidth PSRR LDO with 150-mA maximum output.

 

3. The TPS717 features a high-bandwidth, high-gain error loop gain for high PSRR. (Source: TI “TPS717” PDF)

PSRR is expressed in decibels (dB)—the higher the number, the better the rejection. The equation is:

PSRR (dB) = 20 × log10(VRIPPLE(IN) / VRIPPLE(OUT))

A couple of factors can affect the PSRR. An increased load current affects both ends of the frequency range. The output impedance of the FET is inversely proportional to the drain current, so an increase in IOUT lowers the PSRR at lower frequencies because it lowers the error-loop gain. At the same time, increasing the load current moves the output pole to higher frequencies. This increases the feedback loop bandwidth, which increases the PSRR at higher frequencies.  

The PSRR also lowers when VDS drops below about 1 V because the FET begins to transition from the active (saturation) region of operation into the triode/ohmic region, which also causes the feedback loop to lose gain.

When comparing PSRR performance between LDOs, it’s important always to match VIN-VOUT and load currents. It’s also important to compare LDOs with identical VOUT, since PSRR is usually better at lower output voltages.

Output Noise and Spectral Noise Density

Of course, the LDO itself isn’t noiseless. It generates internal (intrinsic) noise that appears on the output and adds to the residual input noise discussed above. Several physical mechanisms contribute to intrinsic noise in LDOs and other components. 

Thermal noise occurs in both active and passive devices due to the random motion of charge carriers (either electrons or holes) in a conductor. Thermal noise is proportional to absolute temperature and is independent of current flow. It has the flat spectrum characteristic of white noise.

Flicker noise occurs only in active devices and varies by technology. Flicker noise is proportional to current flow and inversely dependent on frequency, hence its other name of 1/f noise. 

Finally, shot noise is caused by electrons or holes randomly crossing a potential barrier such as a PN junction. It’s also associated with current flow and has a flat frequency spectrum.

Intrinsic noise is expressed in µV/√Hz; it’s characterized by an output spectral-noise-density graph in the LDO datasheet showing how the noise varies over frequency. Datasheets typically include several graphs that describe how the output noise varies with output current (IOUT), output voltage (VOUT), and other parameters.

Figure 4 shows the spectral noise density versus VOUT for the TPS7A91, a 1-A high-accuracy low-noise LDO with 200-mV maximum dropout voltage.

4. The spectral noise density of an LDO increases with VOUT at lower frequencies. (Source: TI Blog: “LDO basics: noise—part 1”)

The output noise is concentrated at the lower end of the frequency spectrum. Datasheets commonly provide a single noise value for comparison purposes. This value represents the output noise integrated from 10 Hz to 100 kHz and is expressed in microvolts root-mean-square (μVRMS). Using this metric, the TPS7A91’s output noise figure is 4.7 μVRMS.
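The integration behind such a single-number spec can be sketched as follows; the density points below are made-up illustrative values, not the TPS7A91’s actual curve:

```python
import math

# Integrating a spectral-noise-density curve down to a single uVrms figure
# over the 10 Hz - 100 kHz convention mentioned above.
freqs   = [10, 100, 1e3, 10e3, 100e3]           # Hz (illustrative points)
density = [60e-9, 25e-9, 12e-9, 10e-9, 10e-9]   # V/sqrt(Hz) (assumed values)

# Vrms^2 = integral of density^2 over frequency (trapezoidal rule)
v2 = 0.0
for (f1, d1), (f2, d2) in zip(zip(freqs, density), zip(freqs[1:], density[1:])):
    v2 += 0.5 * (d1**2 + d2**2) * (f2 - f1)

print(f"integrated noise ~ {math.sqrt(v2) * 1e6:.2f} uVrms")
```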

Be careful when comparing components from different manufacturers. Some datasheets integrate the noise over another frequency range, such as 100 Hz – 100 kHz, or even use a custom range. Integrating over a select frequency range can help mask unflattering noise properties, so it’s important to examine the individual noise curves in addition to the integrated value.

To Improve LDO Noise Performance, Start with the Voltage Reference

How do we improve the noise performance of the standard LDO? Although every component in the LDO contributes to the overall noise, the voltage-reference circuit is the main culprit. There are two reasons: the circuit has many noise-generating active and passive components; and any noise on VREF is amplified by the error amplifier. Any input ripple that appears on the reference is also amplified and appears on the output; therefore, the bandgap reference must have high PSRR as well as low noise.  

The solution for both internal noise and PSRR is simply to add a low-pass filter (LPF) in series with the bandgap reference output.  As shown in Figure 5, this LPF is accomplished by adding an external capacitor CNR/SS to the existing internal resistor RNR/SS, and many low-noise LDOs include a pin for this purpose.

5. Adding a noise-reduction capacitor (1) and a feed-forward capacitor (2) can improve LDO noise performance. (Source: TI Blog: “LDO basics: noise—part 2,” Fig. 1)

The capacitor serves a dual purpose: when the LDO is initially powered up, the VREF voltage seen by the error amplifier ramps up as CNR/SS charges through the internal resistor. In effect, the LPF adds a soft-start, hence the “NR/SS” subscript on the resistor and capacitor. This soft-start helps prevent a current-limit condition if the LDO must charge a large-value output capacitance at startup.
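A quick sketch of that soft-start ramp, assuming illustrative values for the internal resistor and the reference (actual values are device-specific):

```python
import math

# Soft-start behavior of the NR/SS low-pass filter: the reference seen by
# the error amplifier charges as a first-order RC ramp.
r_nr  = 250e3   # internal RNR/SS resistor (assumed)
c_nr  = 10e-9   # external CNR/SS capacitor (assumed)
v_ref = 0.8     # bandgap reference (assumed)

tau = r_nr * c_nr
for t in (tau, 3 * tau, 5 * tau):
    v = v_ref * (1.0 - math.exp(-t / tau))   # first-order exponential rise
    print(f"t = {t * 1e3:.1f} ms: VREF seen = {v:.3f} V")
```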

The LPF isn’t a panacea. Adding it reduces the PSRR at low frequencies because it passes bandgap ripple in this range. A low-ESR ceramic output capacitor improves PSRR performance. The capacitance value should be chosen based on the frequencies that are important for the application. Careful board layout design will reduce the ripple feedthrough from input to output caused by board parasitics.

Another strategy to prevent noise from being amplified by the error amplifier is to add a feed-forward capacitor. CFF provides an ac bypass around resistor R1, leaving the dc gain unchanged but reducing the gain at higher frequencies.

Adding a feed-forward capacitor has multiple effects on LDO performance, including improved noise, stability, load response, and PSRR. It also improves the phase margin of the feedback loop, enhancing the LDO’s load transient response with less ringing and faster settling time.
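As a rough first-order model (the component values here are illustrative assumptions), CFF introduces a zero set by R1 and a pole set by R1 in parallel with R2:

```python
import math

# Frequencies introduced by a feed-forward capacitor across R1:
# zero at 1/(2*pi*R1*CFF), pole at 1/(2*pi*(R1 || R2)*CFF).
r1, r2, c_ff = 30e3, 10e3, 10e-9   # assumed values for illustration

f_zero = 1.0 / (2.0 * math.pi * r1 * c_ff)
f_pole = 1.0 / (2.0 * math.pi * (r1 * r2 / (r1 + r2)) * c_ff)
print(f"zero ~ {f_zero:.0f} Hz, pole ~ {f_pole:.0f} Hz")
```

Between the zero and the pole, the closed-loop noise gain falls toward unity, which is why the noise improvement in Figure 7 appears at lower-middle frequencies.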

6. The TPS7A91 is an adjustable-output LDO optimized for low-noise operation. (Source: TI “TPS7A91” PDF)

7. Adding a feed-forward capacitor CFF improves the TPS7A91’s output noise performance at lower frequencies. (Source: TI Blog: “LDO basics: noise—part 2,” Fig. 2)

A feed-forward capacitor can’t be added to a standard three-terminal fixed-output LDO because the R1/R2 node is internal. Still, many LDOs designed for low-noise operation make this node available on a pin. Of course, adjustable-output LDOs such as the TPS7A91 in Figures 6 and 7 include a pin (FB) for external resistors and CFF. This device also features a CNR/SS pin.

The table provides a summary of the effects versus frequency of the two noise-reduction techniques discussed above: CNR and CFF.

Conclusion

The LDO is the dominant architecture for linear regulation. This article discussed the two main FET LDO architectures, the sources of noise, and reviewed techniques to help improve noise performance. For a deeper discussion of LDO noise, go here. PSRR is discussed in more detail here, and you can find out more about using an LDO feedforward capacitor in this application report.

Texas Instruments offers over 500 LDOs for different applications, giving the designer a broad range of features: low noise, wide VIN, small size, low Iq, and processor attach.

A Closer Look at the Common Emitter Amplifier and Emitter Follower


Usually targeted at audio applications, these circuit blocks find homes in all types of designs. Here’s a quick study of their operation and properties.

Two of the most often used amplifier building blocks in audio-amplifier design are the common emitter amplifier with emitter degeneration and the emitter follower using the same circuit biasing. The common emitter amplifier is the simplest and most often used tool for creating voltage gain. The emitter follower is generally used as a voltage buffer and helps drive loads.

Here, both will be demonstrated through simulation in LTspice and with a simple breadboard prototype. Specifically, the examples will be prototyped with the 2N5401 PNP transistor (if necessary) and the 2N5551 NPN transistor. For the input stimulus to the circuits, a 1-kHz waveform generated from YouTube will be applied through an AUX-cable breakout circuit.

Quick Note on Power Supply Used in Tests

For these quick breadboard prototypes, a 12-V wall-wart power supply was chosen (Figs. 1 and 2). In general, wall-wart power supplies are readily available and cheap. Most people reading this probably have several of them lying around that are no longer of use, which makes them a perfect source of power for breadboard prototypes.

1. Wall-wart power supply.

2. The supply was made in China!

 

However, one must understand that these power supplies are usually pretty noisy. The output voltage is generated by a switch-mode converter running at some specific switching frequency, which creates ripple on the output at that frequency. This ripple can be a source of distortion in circuits and will likely cause problems if not given any thought.

Luckily, this noise can easily be filtered out. Passive RC filters, active filters, and many times LDOs are used to eliminate this noise reliably. Another oft-used simple method to help with the noise involves adding decoupling capacitors to the power supply. A quick glance at many circuits today and you can see that almost all ICs on printed circuit boards (PCBs) have small decoupling caps on each power-supply pin.
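The filtering options mentioned above are easy to quantify. This short Python sketch estimates the cutoff of a simple series-RC filter and its attenuation at an assumed 100-kHz switching frequency; the component values are illustrative, not taken from this supply:

```python
import math

def rc_lowpass(r, c, f):
    """Cutoff frequency and attenuation (dB) of a first-order RC lowpass at f."""
    fc = 1.0 / (2 * math.pi * r * c)
    gain = 1.0 / math.sqrt(1.0 + (f / fc) ** 2)
    return fc, 20.0 * math.log10(gain)

# 10-Ohm series resistor + 100-uF cap against assumed 100-kHz switching ripple
fc, att_db = rc_lowpass(10.0, 100e-6, 100e3)
```

That works out to roughly 56 dB of ripple attenuation from a two-component filter, at the cost of a small DC drop across the resistor.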

3. Decoupling caps added to the breadboard.

Figure 3 shows how adding decoupling caps to the breadboard can improve the integrity of the power-supply voltage. The oscilloscope screenshot in Figure 4 shows the supply connected directly to the oscilloscope probes (ac-coupled). Figure 5 shows the same supply with a single 1-µF capacitor connected from VCC to GND. Notice that the noise on the power-supply line is greatly reduced by adding just one capacitor.

4. Ripple on power-supply rail.

5. Power-supply rail with a 1-µF cap connected from VCC to GND.

Amplifier Circuit

The common emitter amplifier and emitter follower will be demonstrated using the same amplifier circuit. The main difference between the two will be where the output is taken. For the common emitter amplifier, we will take the output at the collector of the transistor. For the emitter follower, the output will be taken at the emitter of the transistor.

The emitter-follower amplifier, often used in amplifier output stages, is known as a class A amplifier. The class A amplifier consists of a transistor that’s always ON. Current is always flowing through the collector, and an output is produced for the entire 360-degree cycle of the input. It’s used to drive the load, which is commonly a speaker with an impedance of 2, 4, or 8 Ω.

To begin, we’ll simulate the circuit to be tested in LTSpice. LTSpice is a great tool for hobbyists, professionals, and any person interested in studying and understanding circuits. It’s free, easy to learn and use, and many resources are readily available on the web.

6. LTSpice schematic of the circuit to be tested.

Figure 6 illustrates the circuit that will be tested on the breadboard. We won’t go over the details of biasing the transistor, but note that it was biased with the intent of having approximately 10 mA of collector current. Plots of the output voltage are shown at the collector (Fig. 7) and the emitter (Fig. 8).

7. LTSpice plot of the output taken at the collector of transistor.

8. LTSpice plot of the output taken at the emitter of the transistor.

The plots reveal fundamental properties of the two amplifier types. First, as Fig. 7 shows, the amplifier has a gain of approximately 2 when the output is taken from the collector, and the output is close to 180 degrees out of phase with the input. Deriving the small-signal transfer function of the circuit confirms this. The gain depends on the collector and emitter resistors; a good approximation is the collector resistance divided by the emitter resistance.

On the other hand, when the output is taken at the emitter (Fig. 8, again), the signals are close to in-phase with one another and the gain is approximately unity. Once again, this also matches what would be shown by the small signal transfer function.
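Both observations follow from the standard small-signal approximations. This Python sketch evaluates them with hypothetical resistor and load values, chosen to be consistent with the ~10-mA bias and gain of roughly 2 described above (the actual breadboard values aren't restated here):

```python
def ce_gain(rc, re_ext, ic, vt=0.026):
    """Common emitter with degeneration: Av ~ -Rc/(Re + re), where re = VT/IC."""
    re_int = vt / ic
    return -rc / (re_ext + re_int)

def follower_gain(re_ext, rl, ic, vt=0.026):
    """Emitter follower: Av ~ (RE||RL) / (RE||RL + re), just under unity."""
    re_int = vt / ic
    re_eff = re_ext * rl / (re_ext + rl)  # emitter resistor in parallel with load
    return re_eff / (re_eff + re_int)

# Hypothetical values: 200-Ohm collector, 100-Ohm emitter, 10-mA bias, 1-kOhm load
av_ce = ce_gain(rc=200.0, re_ext=100.0, ic=0.010)
av_ef = follower_gain(re_ext=100.0, rl=1e3, ic=0.010)
```

The intrinsic emitter resistance re (about 2.6 Ω at 10 mA) nudges the collector-output gain slightly below Rc/Re and the follower gain slightly below unity, matching the measured trends.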

9. Class A amplifier breadboard prototype.

Next, the breadboard circuit is built and tested (Fig. 9). Figs. 10 and 11 show the plots taken on the oscilloscope for the corresponding outputs at the collector and the emitter, respectively.

10. Plot showing the input and output signals taken at the collector.

11. Plot showing the input and output signals taken at the emitter.

Looking at the plots, we can see that the waveforms closely match the expected simulated waveforms. It’s good practice to first design your circuits on paper if possible, then simulate them, and finally prototype and test, checking how closely the results match your expectations.

We can see from the plots in Figs. 10 and 11 that the simulations closely match the measured results in both amplitude and phase. For simplicity, the paper calculations have been left out of this article. However, that doesn’t mean they’re not important: if your circuit doesn’t work on paper, it will not work in simulation or on the breadboard. Paper calculations should always be the starting point and, in my opinion, are the most important part of the design.

Wrap Up

We have presented two different types of building blocks commonly used in audio amplifier design. The amplifiers serve two different purposes, with one being a voltage gain block and the other being a unity gain voltage buffer. These were both derived, simulated, and prototyped with closely matching results. In addition to audio, these circuit blocks find use in all types of applications. A good understanding of the amplifiers’ operation and properties will help all circuit designers from many different disciplines.

Optoelectronic Oscillators Charge mmWave Synthesizers


These X- and K-band synthesizers apply optoelectronic technology to achieve high levels of frequency stability with reduced phase noise.

Download this article in PDF format.

Reducing noise in higher-frequency oscillators is one way to achieve reliable, high-data-rate communications, although noise tends to rise with increasing frequency. All sorts of oscillators and frequency-synthesis techniques have been applied in recent years in attempts to trim phase-noise levels at microwave frequencies. Many of these approaches have been electrical in nature.

Taking a different tack, Synergy Microwave Corp. and Drexel University jointly developed a line of frequency synthesizers that leverage optical circuit techniques to help achieve lower-noise microwave signals at X- and K-band frequencies. In these low-noise frequency synthesizers, optoelectronic transmission lines and optoelectronic oscillators (OEOs) are part of the solution for reducing both close-in and far-from-the-carrier phase noise in microwave signal sources.

The latest line of low-noise frequency synthesizers from Synergy Microwave Corp. (Fig. 1) builds upon OEOs to produce tunable, low-noise output signals at X- and K-band frequencies. They use long fiber-optic delay lines to reduce phase noise close to the carrier. The synthesizers also employ self-phase-locked-loop (SPLL) and self-injection-locking (SIL) techniques to reduce phase noise otherwise located close-in and far-from the carrier, respectively.

1. Shown in two views, this line of rack-mountable frequency synthesizers employs different techniques and technologies to trim phase noise at X- and K-band frequencies.

The X- and K-band frequency synthesizers are truly subsystem designs, incorporating several different technologies to provide variable-frequency output signals with low phase noise. The frequency synthesizers are suitable for applications in test systems, wireless-communications systems, radar systems, and remote-sensing systems, or wherever microwave receivers require high sensitivity not limited by phase noise.

The frequency synthesizers orchestrate SIL and double-sideband PLL techniques simultaneously, with multiple signal paths within the synthesizers in support of enhanced signal stability as well as application of modulation as needed. Signals are combined within the synthesizer with the aid of a custom-designed, double-balanced frequency mixer and lowpass-filter-amplifier (LPFA) assembly. The synthesizer design also incorporates operational-amplifier (op-amp) circuits that work as the phase detector and lowpass portion of the PLL.

Very Much in Tune

The synthesizers’ high-resolution, wavelength-sensitive fine tuning is made possible by an optical transversal filter. The optical filter uses a chirped fiber Bragg grating (CFBG) as a dispersive component to achieve narrowband filtering. A current-tuned YIG filter is cascaded with the optical filter and CFBG to provide coarse frequency tuning across wide X- and K-band tuning ranges.

At X-band frequencies from 9 to 11 GHz, for example, the YIG filter tunes with a response of about 25 MHz/mA. Since the resolution of the current supply feeding the YIG filter is about 1 mA, the effective frequency-tuning resolution of the YIG filter is 25 MHz. This combination of optical and electronic technologies results in relatively wide frequency-tuning ranges with outstanding phase noise, both close in and far from the carrier.

Case in point: at X-band, the single-sideband phase noise is −109.97 dBc/Hz offset 1 kHz from the carrier and −136.45 dBc/Hz offset 10 kHz from the carrier for carrier frequencies from 9 to 11 GHz (Fig. 2). In the time domain, this translates to 4.395 fs measured at sidemode markers of 35 and 200 kHz from the carrier. At higher K-band frequencies, the SSB phase noise is −102.30 dBc/Hz offset 1 kHz from the carrier and −127.37 dBc/Hz offset 10 kHz from the carrier, or time-domain response of 6.961 fs measured at sidemode markers of 35 and 200 kHz from the carrier.

2. The low phase noise at X-band frequencies is evident both close to and far from the carrier.
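As a rough cross-check, SSB phase noise can be converted to RMS jitter by integrating L(f) over an offset band. The sketch below interpolates between the two X-band data points quoted above; note that the integration band (1 to 10 kHz) is our assumption and differs from the 35-to-200-kHz sidemode markers used for the quoted 4.395-fs figure, so the result is illustrative only:

```python
import bisect
import math

def rms_jitter(f0, points, f_start, f_stop, n=2000):
    """RMS jitter = sqrt(2 * integral of 10^(L/10) df) / (2*pi*f0), where
    L(f) is piecewise-linear in dBc/Hz versus log-frequency between the points."""
    freqs = [p[0] for p in points]

    def l_dbc(f):
        if f <= freqs[0]:
            return points[0][1]
        if f >= freqs[-1]:
            return points[-1][1]
        i = bisect.bisect_right(freqs, f) - 1
        (f1, l1), (f2, l2) = points[i], points[i + 1]
        t = (math.log10(f) - math.log10(f1)) / (math.log10(f2) - math.log10(f1))
        return l1 + t * (l2 - l1)

    # Trapezoidal integration of the linear spectral density
    total, prev_f = 0.0, f_start
    prev_v = 10.0 ** (l_dbc(f_start) / 10.0)
    for k in range(1, n + 1):
        f = f_start + (f_stop - f_start) * k / n
        v = 10.0 ** (l_dbc(f) / 10.0)
        total += 0.5 * (prev_v + v) * (f - prev_f)
        prev_f, prev_v = f, v
    return math.sqrt(2.0 * total) / (2.0 * math.pi * f0)

# X-band points from the article; the 1-to-10-kHz band is our assumption
jitter_s = rms_jitter(10e9, [(1e3, -109.97), (10e3, -136.45)], 1e3, 10e3)
```

This yields jitter on the order of a couple of femtoseconds over that band, the same order of magnitude as the quoted 4.395 fs (which was measured over a different offset band).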

In terms of size and power, the YIG filter is the dominant component in these optoelectronically driven frequency synthesizers. The synthesized signal source can be contained in a 19-in. rack-mount enclosure for portability.

Most of the current consumption in the frequency-synthesizer assembly is due to the YIG filter, which draws 150 mA at +10 V dc, or about 1.5 W. The dual-channel amplifier draws 80 to 160 mA at +10 V dc, or as much as 1.6 W. The mixer LPFA, which uses a combination of frequency translation and filtering to extract the RF/microwave signals from higher-frequency optical signals, draws about 60 + 5 + 45 mA, or 110 mA total, from respective supplies of +15, +5, and −5 V dc.

In stark contrast, the photodetector operates at very low current and power, with its three cells each drawing about 10 mA, or 30 mA total, at +5 V dc for about 0.15 W.
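Summing the quoted per-rail currents gives a quick power budget for the assembly (supply magnitudes are used for the −5-V rail):

```python
# Per-rail draws quoted in the article: (amps, rail-voltage magnitude)
loads = {
    "YIG filter":     [(0.150, 10.0)],
    "dual amplifier": [(0.160, 10.0)],           # worst case, both channels
    "mixer LPFA":     [(0.060, 15.0), (0.005, 5.0), (0.045, 5.0)],
    "photodetector":  [(0.030, 5.0)],            # three cells at ~10 mA each
}
power_w = {name: sum(i * v for i, v in rails) for name, rails in loads.items()}
total_w = sum(power_w.values())
```

The total comes to about 4.4 W, with the YIG filter and amplifier accounting for roughly 70% of it.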

Synergy Microwave Corp., 201 McLean Blvd., Paterson, NJ 07504; (973) 881-8800, FAX: (973) 881-8361, e-mail: sales@synergymwave.com.

Bibliography

U. L. Rohde, A. Poddar, and A. Daryoush, “Self-injection locked phase locked loop optoelectronic oscillator,” U.S. Patent No. US9088369B2, 2012.

U. L. Rohde, A. Poddar, and A. Daryoush, “Integrated production of self-injection locked self-phase locked opto-electronic oscillators,” U.S. Patent No. US9094133B2, 2013.

T. Sun, L. Zhang, A. K. Poddar, U. L. Rohde, and A. S. Daryoush, “Frequency synthesis of forced opto-electronic oscillators at the X-band,” Chinese Optics Letters, Vol. 15, 010009 (2017).

L. Zhang, A. K. Poddar, U. L. Rohde and A. S. Daryoush, “Self-ILPLL Using Optical Feedback for Phase Noise Reduction in Microwave Oscillators,” IEEE Photonics Technology Letters, Vol. 27, No. 6, March 15, 2015, pp. 624-627.

L. Zhang, A. Poddar, U. Rohde, and A. S. Daryoush, “Phase noise reduction and spurious suppression in oscillators utilizing self-injection loops,” 2014 IEEE Radio and Wireless Symposium (RWS), Newport Beach, CA, 2014, pp. 187-189.

L. Zhang, A. K. Poddar, U. L. Rohde, and A. S. Daryoush, “Comparison of Optical Self-Phase Locked Loop Techniques for Frequency Stabilization of Oscillators,” IEEE Photonics Journal, Vol. 6, No. 5, October 2014, pp. 1-15.

T. Sun, L. Zhang, A. K. Poddar, U. L. Rohde, and A. S. Daryoush, “Forced SILPLL oscillation of X- and K-band frequency synthesized opto-electronic oscillators,” 2016 IEEE International Topical Meeting on Microwave Photonics (MWP), Long Beach, CA, 2016, pp. 91-94.

L. Zhang, A. Poddar, U. Rohde, and A. Daryoush, “Analytical and Experimental Evaluation of SSB Phase Noise Reduction in Self-Injection Locked Oscillators Using Optical Delay Loops,” IEEE Photonics Journal, Vol. 5, No. 6, December 2013, pp. 6602217-6602217.


Can’t Afford an Atomic Clock? Get a Molecular One!


Using the THz-range resonance of a molecule rather than of atoms led to the development of a clock with nearly the performance of atomic clocks but fabricated as an IC.

Atomic clocks based on stimulated resonance of cesium 133 or rubidium atoms are the most accurate clocks in relatively widespread use (there’s one in each GPS satellite), but they’re costly and relatively large. Even the chip-scale atomic clocks for specialty applications such as military-mission synchronization in the field cost on the order of $1,000.

However, a team at MIT’s Department of Electrical Engineering and Computer Science (EECS) working with their Terahertz Integrated Electronics Group (TIEG) has developed a clock that’s almost as good. It’s built as an IC, with corresponding reduction in size, power, and cost.

Their “molecular” clock relies on measuring the rotation of carbonyl sulfide (OCS) molecules when they’re exposed to certain frequencies; that’s why it’s called “molecular” rather than “atomic.” (Note: carbonyl sulfide—more formally, 16O12C32S—is a chemical compound with other often-used designations and abbreviations as well.)

Measurements show that the molecular clock had an average error of less than one microsecond per hour, which is comparable to miniature atomic clocks and four orders of magnitude more stable than mid-range crystal-oscillator clocks such as those used in smartphones. Even better, this clock is fully electronic, doesn’t require bulky or power-hungry support components to insulate and excite the molecules, and is fabricated using standard CMOS IC processes (see figure).

Some perspectives on the design, implementation, and measurement results of the MIT-developed chip-scale, silicon molecular clock. (Source: MIT)

Their report, “An on-chip fully electronic molecular clock based on sub-terahertz rotational spectroscopy,” published in Nature Electronics, describes the principles, design, and operation, as well as test results. It also explains the advances, beyond the basic concept, that this topology required.

OCS and Terahertz Frequencies

To create the needed molecular resonance, which is the basis for the approach, they attached a “gas cell” filled with OCS. The IC generates and sweeps a variable-frequency terahertz (THz) signal across the cell that incites the molecules to start rotating. At the same time, a THz receiver measures the energy of these rotations and adjusts the THz oscillation frequency via a closed-loop arrangement. The OCS molecules reach peak rotation and show a sharp signal response very close to 231.060983 GHz, which is their “natural” resonance frequency. The THz source clock at this resonance frequency is then divided down to generate standard one-pulse-per-second timing pulses.
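The closed-loop idea, sweeping the THz source and servoing it onto the peak molecular response, can be illustrated with a toy model. The Lorentzian linewidth and the simple dither-lock scheme below are our assumptions for illustration, not the chip's actual loop:

```python
def line_response(f, f0=231.060983e9, width=1e6):
    """Toy Lorentzian response of the OCS rotational line (width assumed)."""
    return 1.0 / (1.0 + ((f - f0) / width) ** 2)

def lock_to_peak(f_start, step=1e5, iters=5000):
    """Crude dither lock: step toward the side with the stronger response,
    shrinking the dither as the loop converges on the peak."""
    f = f_start
    for _ in range(iters):
        if line_response(f + step) > line_response(f - step):
            f += step
        else:
            f -= step
        step *= 0.999
    return f

f_locked = lock_to_peak(231.0e9)  # start roughly 61 MHz below the line
```

In the real chip, the THz oscillation locked to this resonance is then divided down to the 1-pulse-per-second output.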

The team had to develop a controllable, tunable THz source. This was accomplished by using custom metal structures plus other components to enhance the performance of the on-chip transistors that upconvert the initial low-frequency input signal into the THz output. They also wanted to consume as little power as possible; the device dissipates just 66 mW.

The paper’s authors conclude that “Our work demonstrates the feasibility of monolithic integration of atomic-clock-grade frequency references in mainstream silicon-chip systems.” In addition to the paper’s expected references on molecular clocks and OCS, the paper contains many interesting historical and tutorial references on the design, technology, and construction of full-size and chip-scale atomic clocks.

The work was supported by a National Science Foundation CAREER award, MIT Lincoln Laboratory, MIT Center of Integrated Circuits and Systems, and a Texas Instruments Fellowship.

15 Preparation Ideas for Your Next IoT Edge Project


Based on actual knowledge gained from other successful IoT edge device projects.

Before starting your IoT edge device development process, it is wise to spend time preparing for your new project. Planning before you start will limit frustration and save you time and money in the long run. Before diving into the task, study the 15 preparation considerations in this white paper.

How to Create an IoT Edge System Prototype


Once a successful proof-of-concept has been established, what’s next?

The day after creating a proof-of-concept of your custom IC, you might ask “Now what do we do?” It is time to make your idea real and start developing a system prototype. This white paper covers developing the system prototype of an IoT edge device that contains a sensor, supporting electronics, and software running on an embedded processor.

Capacitor Basics and Their Uses in Power Applications


Capacitors play key roles in the design of filters, amplifiers, power supplies and many additional circuits. Here's a brief guide to the different types and the applications they're best suited for.

Out of all of the fundamental passive electronic components, capacitors are arguably the most abundantly used. In fact, it is hard to find a circuit board that does not have a capacitor on it, or a circuit that does not use one.

Capacitors play key roles in the design of filters, amplifiers, power supplies, and many additional circuits. There are many different types of capacitors, each with its own advantages and disadvantages. The following paragraphs review some of the important characteristics of the different capacitor types and their potential applications.

Figure 1. Multiple electrolytic capacitors.

All capacitors fundamentally do the same thing: they store charge. Capacitance quantifies a capacitor’s ability to store charge. It is measured in farads; one farad is equivalent to one coulomb per volt, or one amp-second per volt (F = C/V = A·s/V). The unit is named for the great British scientist Michael Faraday.
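As a one-line illustration of the definition, Q = C·V gives the stored charge directly:

```python
def stored_charge(c, v):
    """Q = C * V, in coulombs (equivalently, amp-seconds)."""
    return c * v

q = stored_charge(100e-6, 12.0)  # a 100-uF cap charged to 12 V holds 1.2 mC
```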

Most capacitors are available in leaded (through-hole) and surface-mount packages. Surface-mount components are the most commonly used; in fact, surface-mount capacitors are so popular that there is an industry-wide shortage. For some applications involving high voltage and mains lines, through-hole components offer an advantage due to their higher capacitance and power-handling abilities.

Capacitors are made with different types of dielectric materials, including ceramic, electrolytic, tantalum, polyester, and polypropylene. The choice of material directly impacts key characteristics and performance.

Figure 2. Surface mount and leaded ceramic capacitors

Ceramic capacitors find use in all applications operating from DC to RF. They are capable of handling high voltages and generally have low equivalent series resistance (ESR) and equivalent series inductance (ESL). They are limited by their achievable capacitance value, but they do happen to be the least expensive type.

One of the most widely used applications for ceramic capacitors is decoupling or bypassing the power-supply pin of an integrated circuit (IC), keeping any stray RF signals out of the voltage supply. Basically, the capacitor provides a low-impedance shunt path to ground for higher-frequency signals. This keeps those signals from entering the IC and wreaking havoc, and helps provide a more constant DC supply voltage. When the supply voltage momentarily dips below the needed level, the decoupling caps can temporarily supply the required charge.
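The shunt-path argument is easy to check numerically: a capacitor's impedance magnitude falls with frequency until its parasitic inductance takes over. The ESR/ESL values below are typical assumptions for a small ceramic, not datasheet figures:

```python
import math

def cap_impedance(c, f, esr=0.0, esl=0.0):
    """Impedance magnitude of a capacitor with series ESR and ESL parasitics."""
    reactance = 2 * math.pi * f * esl - 1.0 / (2 * math.pi * f * c)
    return math.sqrt(esr ** 2 + reactance ** 2)

# A 100-nF ceramic: low-impedance shunt at RF (ESR/ESL values assumed)
z_1mhz = cap_impedance(100e-9, 1e6)
z_100mhz = cap_impedance(100e-9, 100e6, esr=0.01, esl=1e-9)
```

Above the self-resonant frequency set by the ESL, the impedance rises again, which is one reason boards often use several decoupling caps of different values in parallel.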

On the other hand, electrolytic capacitors are capable of providing engineers with a much higher capacitance value. These capacitors can be found in many power electronics and in circuits with high amounts of power consumption. One example where electrolytics offer an advantage is the reservoir capacitor in power supplies.

Figure 3. Mean Well power supply with multiple types of capacitors.

In the picture shown above, the reservoir capacitor appears in the upper left. Its purpose is to smooth out the rectified voltage from the power lines. Some level of ripple remains after this smoothing, so the capacitor must have a sufficient ripple-current rating; the ripple current includes both the capacitor’s discharge current and its charging current. These capacitors generally have a very large value and must be chosen so that their discharge time constant is much longer than the period of the AC line. Care must be taken, however, to avoid using a capacitor that is larger or more costly than necessary.
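A common first-pass sizing rule for the reservoir capacitor is ΔV ≈ I / (f_ripple · C), with the ripple frequency equal to twice the line frequency for a full-wave rectifier. A sketch with assumed load and line values:

```python
def ripple_voltage(i_load, c, f_line=50.0, full_wave=True):
    """First-pass peak-to-peak ripple estimate: dV ~ I / (f_ripple * C)."""
    f_ripple = 2.0 * f_line if full_wave else f_line
    return i_load / (f_ripple * c)

# Assumed: 1-A load, 4700-uF reservoir cap, 50-Hz mains, full-wave rectifier
dv = ripple_voltage(1.0, 4700e-6)
```

That works out to about 2.1 V peak-to-peak; halving the ripple means doubling the capacitance, which is where the size/cost tradeoff mentioned above comes in.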

Like electrolytics, tantalum capacitors are generally polarized, but they offer lower leakage currents. Due to the tantalum material used in their construction, they can offer higher capacitance values in smaller package sizes than other choices. They also have better stability characteristics, meaning their capacitance is less likely to change over time. Disadvantages include a higher price and also the way they can fail.

When electrolytics fail, they tend to fail open. In other words, they appear to become an open circuit. When tantalum capacitors fail, they tend to create a short circuit and in some instances can even burn. Consequently, using them responsibly requires fail-safe circuits to prevent any damage to the circuits. Tantalum capacitors find use in many military and medical applications.

In conclusion, capacitors are one of the most widely used electrical components. When designing circuits, the choice of capacitor must not be overlooked. Characteristics such as voltage rating, ripple current rating, size and cost must always be taken into consideration.

To help with capacitor selection, many manufacturers offer online tools available to help navigate and find the right capacitor for the application. It is wise to use these tools as they will simplify the selection process. Just as there is a right tool for every job, there is a right capacitor for every application.

TI Precision Labs - ADCs: DC Specifications


This video highlights the key DC specifications of analog-to-digital converters (ADCs) including input capacitance, leakage current, input impedance, reference voltage range, INL, and DNL.
