Channel: Electronic Design - Analog

AVX Buys Ethertronics in Move from Passive Components to Active Antennas


AVX Corporation, one of the largest makers of passive components and interconnects, has agreed to acquire Ethertronics, whose advanced antenna systems are installed in everything from connected cars to medical equipment.

As part of the deal, AVX is paying $142 million in cash and assuming $8 million of debt.

Ethertronics supplies both passive and active antenna systems, precisely engineered to suppress local interference or changes in ground planes. The company’s chips plot out radiation patterns every millisecond and then its algorithms sample and switch between them to optimize for reliability, range and efficiency based on the surrounding environment.

Ethertronics retails isolated magnetic dipole antennas for many different applications like personal computers and phones. It has expanded into the market for connected cars, which share their location and other information with each other and the cloud. The company also built an advanced test chamber specifically for the automotive applications of its chips.

For AVX, the acquisition expands its product portfolio, which is dominated by passive components like tantalum and ceramic capacitors. Ethertronics, which is privately owned, generated around $90 million of revenue over the last year and employs around 700 people. The company was founded in 2000 and is based in San Diego, California.

“The addition of Ethertronics is an exciting opportunity for AVX as we expand our extensive electronic product offering into a new arena,” said John Sarvis, chief executive officer of AVX Corporation, in a statement. “The combination of AVX and Ethertronics offers exciting growth potential for the years ahead.”



Simple Generator Provides Very-Low-Frequency/Distortion Sine and Square Waves


With a handful of low-leakage passive components and two high-input-impedance op amps, you can build a sine/square-wave generator with an output period of a minute or more.

Download this article in PDF format.

The circuit of Figure 1 generates sinusoids down to very low frequencies with distortion in the region of 3% or less, yet has no feedback or gain-stabilizing components because none are needed. It uses a phase-shift oscillator with a low-pass phase-shift network configuration rather than the more-common high-pass network.

The frequency-determining network is used as a low-pass filter to remove most of the harmonic content of the output waveform. Other phase-shift oscillators using low-pass networks have been published, but most have been more complicated (some of them far more so).

1. This very-low-frequency sine- and square-wave generator requires few components, but provides low-distortion outputs (derived from LTspice simulator).

The output of op amp U1 is applied to the first section of the phase-shifting network via R1 and C1. Each stage of the network attenuates more of the harmonic content (along with some of the fundamental). The final sinusoidal waveform is fed back into U1, which is operating open-loop and thus generates square waves at its output. It’s also fed into U2, which is operating as a linear amplifier to restore the low-level sinusoid up to a more-usable level at low impedance.

With three 2.2-MΩ resistors and three 1-µF capacitors, the circuit generates a sinusoid waveform at about 0.174 Hz. (Note that a three-gang variable-tuning capacitor can be used here to create an inexpensive, adjustable-frequency audio oscillator.)
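As a sanity check on those component values, the unbuffered three-section low-pass RC ladder reaches 180 deg. of phase lag at ωRC = √6, giving an ideal oscillation frequency of √6/(2πRC). The sketch below (an idealized equal-value analysis, not the article's simulation, which reported about 0.174 Hz) lands close to that figure:

```python
import math

def lp_phase_shift_freq(r_ohms, c_farads):
    """Ideal oscillation frequency of a three-section low-pass RC
    phase-shift ladder (equal R and C, unbuffered): 180 deg. of lag
    occurs at w*R*C = sqrt(6), so f = sqrt(6) / (2*pi*R*C).
    Op-amp nonidealities shift the real value slightly."""
    return math.sqrt(6) / (2 * math.pi * r_ohms * c_farads)

f = lp_phase_shift_freq(2.2e6, 1e-6)  # 2.2 MOhm, 1 uF
print(f"{f:.3f} Hz")  # ~0.177 Hz, close to the ~0.174 Hz from simulation
```

The same formula shows why minute-long periods need such large RC products: the period scales directly with RC.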

The circuit starts up quickly, within a few cycles regardless of frequency, and its output amplitude is very stable. U1 doesn't need to deliver a rail-to-rail output, but it must clip symmetrically on overload; if it doesn't, even-order harmonic components will appear in the output. If this circuit also had a true triangular-wave output, it would be called a function generator. However, this simple configuration doesn't provide that waveform.

By increasing the values of the network components, this circuit can generate sinusoids with periods of the order of a minute or more, which largely depends on the characteristics of the components. With the high resistor values shown, the op amps must be CMOS devices in order to have the necessary extremely high input impedance, and the timing capacitors must also have very low leakage. Figure 2 shows the circuit’s waveforms at critical points.

2. The waveforms at four critical points include the output of U1 (the square wave) and the junction of R1 and C1 (the “triangular” wave) (a); the waveforms at R1/C1 and R2/C2 (b); the waveform at R2/C2 and the sinusoidal waveform at R3/C3 (c); and the square-wave and sine-wave outputs from this simple generator (d).

A possible negative aspect of this circuit is that it requires a second op amp as a buffer with gain to provide the sinewave output. That second op amp has the same high input-impedance requirement as the first one. Although shown as a generator of very low frequencies, the upper-frequency range of this circuit is limited only by the gain-bandwidth product characteristics of the chosen op amps. By using available wideband op amps, it will operate into the upper parts of the audio region without a serious increase in distortion.

Jim Tonne has been involved with electrical-circuit design and analysis, largely in the broadcast industry, since 1965. His specialties are filter design and speech processing, and he has been a regular contributor to the ARRL Handbook as well as QEX, the ARRL's Forum for Communications Experimenters. His website is www.TonneSoftware.com, and he can be reached at Sales@TonneSoftware.com.

At CES 2018 Powercast, Energous to Demo Wireless Charging at a Distance


True wireless charging—delivering power to your mobile device over the air up to 80 ft. away—has been approved by the FCC, and will be on display in Las Vegas.

While you were busy boxing up that ugly sweater Aunt Agnes gave you for Christmas, not one but two companies were announcing that the FCC had okayed their wireless power-at-a-distance charging solutions.

On Dec. 26, Powercast Corporation said that it will unveil at CES (booth #40268) an FCC- (Part 15) and ISED-approved (Innovation, Science, and Economic Development Canada) three-watt PowerSpot transmitter that can deliver over-the-air charging to multiple electronic devices from a few inches to 80 ft. away. No charging mats or direct line of sight are needed.

Until now, wireless charging has been very short-range, based on either the Qi or Powermat standards, where the device being charged generally must sit on a charging pad (often at a very specific spot) that's plugged into an outlet.

The Powercast PowerSpot transmitter, on the other hand, sends RF energy on the 915-MHz ISM band over the air to a receiver chip embedded in a device, which converts it to DC to recharge its batteries or directly power the device. This remote charging technology behaves like Wi-Fi, where enabled devices automatically charge when within range of a power transmitter.

1. The PowerSpot charging zone ranges from a few inches up to 80 ft.

Charging rates will vary with distance, type, and power consumption of a device. An illuminated LED indicates devices are charging, and it turns off when they're done. Audible alerts indicate when devices move in and out of the charging zone. The PowerSpot transmitter uses Direct Sequence Spread Spectrum (DSSS) modulation for power and Amplitude Shift Keying (ASK) modulation for data, and includes an integrated 6-dBi directional antenna with a 70-deg. beam pattern.

According to Powercast, power-hungry devices like game controllers, smart watches, and fitness bands will charge best up to 10 ft. away; low-power devices like home automation sensors (temperature, etc.) can be charged up to 80 ft. away. The company expects that up to 30 devices left in the zone on a countertop or desktop overnight can charge by morning, sharing the transmitter's three-watt (EIRP) power output (Fig. 1).
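A rough free-space (Friis) estimate shows why the available power falls off so quickly with distance. The sketch below assumes a 0-dBi receive antenna and ignores rectifier efficiency, polarization mismatch, and multipath, so real harvested power will be lower than these numbers:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def friis_rx_power_w(eirp_w, freq_hz, dist_m, rx_gain=1.0):
    """Free-space received power: Pr = EIRP * Gr * (lambda / (4*pi*d))^2.
    A back-of-envelope bound only; losses in the rectifier and the
    environment are not modeled."""
    lam = C / freq_hz
    return eirp_w * rx_gain * (lam / (4 * math.pi * dist_m)) ** 2

# 3-W EIRP at 915 MHz, distances from the article's charging-zone figures
for feet in (1, 10, 80):
    p_mw = friis_rx_power_w(3.0, 915e6, feet * 0.3048) * 1e3
    print(f"{feet:>3} ft: {p_mw:.4f} mW at the receive antenna")
```

Power drops with the square of distance, which is consistent with the article's split between power-hungry devices near the transmitter and low-power sensors farther out.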

Powercast’s Lifetime Power Energy Harvesting Development Kit for Battery Recharging is a demonstration and development platform for recharging batteries wirelessly from RF energy. It is designed to be used with an app and is configured for out-of-the-box operation. The battery recharging boards utilize the P1110B Powerharvester Receiver, which converts RF energy into DC power. Either the PowerSpot transmitter or TX91501-3W transmitter is the source of RF energy, with both operating at 915 MHz. Other RF energy sources operating from 850-950 MHz can also be used as power sources (UHF RFID readers, for example).

The kit includes a PowerSpot RF transmitter (with plastic mounting clips), a TX91501 transmitter, an RF energy-harvesting evaluation board (P1110-EVB) with two antennas, a smartwatch-shaped battery recharging board, a credit-card-shaped battery recharging board, two coin-cell (CR2032)-sized radio boards, and the necessary power adapters.

The kit also includes two BLE radio boards that can be attached to the battery recharging boards. When used in conjunction with the watch or credit card battery recharging boards, the measured charge current and battery voltage are transmitted via Bluetooth to a mobile device and displayed in the included app. This enables the user to test both configurations on battery recharging boards simultaneously. The P1110-EVB is included in the kit, as well. It contains an evaluation board and antennas to test and develop with the P1110B Powerharvester Receiver. Like the Battery Recharge Boards, it converts RF energy into DC power which can be stored in a battery or capacitor, or else used to directly power a circuit. It includes two different antenna configurations.

Powercast will begin production of its standalone PowerSpot charger now that it is FCC-approved, and is also offering a PowerSpot subassembly to consumer goods manufacturers who want to integrate it into their own products (think lamps, appliances, set-top boxes, gaming systems, computer monitors, furniture or vehicle dashboards).

Also on Dec. 26, Energous Corp., the developer of WattUp charging technology, announced that it had received "Industry-First FCC Certification for Over-The-Air, Power-At-A-Distance Wireless Charging." The FCC determined that WattUp's RF beamforming-based, wire-free, at-a-distance charging is safe and meets the current regulatory health and safety guidelines established by the FDA (and enforced by the FCC).

The WattUp Mid Field transmitter sends focused, RF-based power to devices at a distance. The company claims this is the first FCC certification for power-at-a-distance wireless charging under Part 18 of the FCC's rules; Part 18 permits higher-power operation than the Part 15 rules used to approve Powercast's at-a-distance charging devices.

WattUp can be applied to higher- or lower-power devices, transmitting at distances from a few inches to 3 ft. or more (Fig. 2). The WattUp system is also designed to charge devices at the point of contact as well as charge multiple devices at once, automatically charging them as needed until they're topped off. This "untethered, wire-free charging" could potentially enable charging a fitness band even while you're wearing it.

2. A WattUp transmitter sends energy via an RF signal to WattUp-enabled electronic devices when requested. A WattUp receiver in each device converts that signal into battery power.

WattUp Mid Field and Far Field Transmitters sense and communicate to authorized receiver devices via Bluetooth Low Energy (BLE), only sending power when needed and requested by those devices. WattUp is software-controlled, determining which devices receive power, when, and in what priority. If there are no authorized devices within range, the WattUp transmitter becomes idle, and no power is wasted.

WattUp uses the 5.850 GHz-5.875 GHz band for the transmission of power. This is just outside of the 5.8 GHz Wi-Fi band. Other technical specifications of the WattUp charging solution include:

  • A GaN-based 5-10 W RF receiver IC
  • A GaN-based 10-15 W RF power amplifier (PA)
  • An RF-based charging solution allowing for full 2D / planar movement
  • Support for 90-deg. charging angles (sideways charging)
  • Accommodation of metal and other foreign objects
  • PA integration into the overall system leading to a lowered BOM cost

Energous will be demonstrating its WattUp technology at CES 2018, January 9-12 in Las Vegas.

The Front End: Filtering or Bandlimiting?


The answer will likely come down to having to deal with noise control, or stopping frequencies altogether to avoid reception collapse.

“Filters are generally used to limit the bandwidth of signals, right? So filtering and bandlimiting are synonymous, right?”—say any number of engineers, after a filtering presentation or lecture.

I love it when people try to catch me out. No, really. When you’re young, the gaps in your knowledge are empty; just voids waiting to be filled. When you’re old—though I prefer the epithet “experienced”—the gaps can still be there. But like the huge regions of non-coding DNA in chromosomes, these gaps are sometimes filled by junk sequences that look like knowledge, but are just nonsense. So when you’re older and experienced, you have to learn in a different way, which is to expose your (supposed) knowledge to brighter, younger minds to see if they can find the stuff you think you know, but don’t.

Okay, way off topic, except to say that for many years, I was so close to my work on filters that I never really stopped to think why I would get so riled when someone said, “Oh, I put a little RC in to filter out some noise” and I would think, “First-order RC?  That’s NOT a filter!”

But if it’s not a filter, what is it? And if it isn’t filtering, what is it doing?

Here’s how I think of it. Filtering and bandlimiting are both processes that affect the frequency response of a circuit or a block in a system. “Frequency response” is common shorthand for how the gain of that circuit or system varies with frequency. It’s not a single value; rather, it’s a function of frequency, often plotted graphically to aid visualization. Bandlimiting, as a term, means doing something to limit, or constrain, the bandwidth of that block. And bandwidth is a single, scalar measure—it’s a simpler concept than frequency response as it’s just a number.

Filter vs. Bandlimiter

So for me, the distinction between a filter and a bandlimiter is this: A filter has an intentional effect on the frequency response of the block, whereas a bandlimiter has an intentional effect on the bandwidth of the block. I’ve written before that if you leave out (or misjudge) bandlimiting, performance will suffer; if you leave out filtering, function will suffer. Let me give a simple example.

Imagine a world where there’s only one radio station. Your receiver is a long, long way from its transmitter, and you only have a very approximate idea of the frequency upon which they transmit their programmes (that’s the British way of spelling this particular use of the word). 

For this example, let’s assume that they are using AM. If you erect a broadband antenna and connect it to a suitable detector, chances are that you’ll be able to capture their signal. But you’ll also collect innumerable sources of noise thrown at you by the rest of the Universe. This noise will be spread over the entire frequency range over which your antenna and detector are active, and it just might drown out what you wanted to hear from the radio station.

Once you’ve got a somewhat better idea of what frequency the radio station broadcasts on, you can limit the bandwidth of your receive channel to a progressively narrower—yet not particularly well-defined—band around that frequency. The narrower you make the bandwidth, the more of the Universe’s interference you’ll be able to keep from messing up your reception. The quality of your reception is likely to be a smooth function of the reduction in bandwidth that you impose. The rate at which the attenuation of your bandlimiting circuit increases at frequencies away from your center frequency need not be great. 

What’s important is the noise bandwidth of the circuit, which you can calculate by integrating the noise power that makes it through the bandlimiter, over frequencies from about zero to about infinity. The only real noise power contributions are those fairly close to the center frequency, and even with only modest components, you can quickly get that bandwidth down to suitable levels.
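As a sketch of that integration, here's the equivalent noise bandwidth of the simplest bandlimiter of all, a first-order RC low-pass, computed numerically and checked against the closed-form (π/2)·fc = 1/(4RC) result. The component values are arbitrary examples, not from the column:

```python
import math

def rc_noise_bandwidth(r_ohms, c_farads, f_max_mult=1000, steps=200_000):
    """Equivalent noise bandwidth of a first-order RC low-pass, found by
    integrating the power response |H(f)|^2 = 1/(1 + (f/fc)^2) over
    frequency (midpoint rule, truncated at f_max_mult*fc).
    Returns (numeric result, closed-form 1/(4*R*C))."""
    fc = 1 / (2 * math.pi * r_ohms * c_farads)
    df = f_max_mult * fc / steps
    numeric = sum(df / (1 + (((i + 0.5) * df) / fc) ** 2) for i in range(steps))
    return numeric, 1 / (4 * r_ohms * c_farads)

num, exact = rc_noise_bandwidth(10e3, 1.6e-9)  # fc ~ 9.95 kHz
print(f"numeric {num:.0f} Hz vs. exact {exact:.0f} Hz")
```

Note that the noise bandwidth (about 15.6 kHz here) is wider than the -3-dB point, because the gentle first-order rolloff keeps letting noise through well past cutoff.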

Now imagine that a second radio station (and a third, and so on…) starts up, pretty close in frequency to the original one. A new problem emerges—your antenna and receiver can pick up both stations at once, and with AM, you end up listening to the sum of their broadcasts. Adjusting the bandwidth of your system with simple components doesn’t help you here, because no matter how small you make the bandwidth, the resulting circuits don’t have the discrimination to separate the transmissions of the two stations. 

The problem you now have is not one of performance—how much noise is there in the background. It’s a functionality issue: How do you receive the wanted transmission and not the unwanted one? In more modern communication systems, the reception could well be completely broken.

In this case, a bandlimiter isn’t good enough. You need… a filter.

Becoming More Receptive

The filter design task is a little more complicated. You have to specify both the range of frequencies that you really want (passband) and the range of frequencies that you don’t want that must be rejected (stopband). The whole arsenal of filter design is intended to help you with this problem.

As you make this filter progressively sharper, you may well find that, suddenly, the system starts working properly and you can hear what you want. Definitely not a smooth function.

Now, when you’ve built this filter, it also has a bandwidth, defined by the range of frequencies you decided to let through. You can perform the same integration over frequency as you did before to get this single scalar result. But in this particular example, you didn’t design the filter to have that bandwidth; it’s a side effect.

That’s a pretty long-winded way of expressing what I hope is actually a fairly simple point. If you just have to control the amount of noise passing through your system in a way that can be captured by a scalar figure of merit, then bandlimiting is what you need. That’s what all of the little RC circuits do in the innumerable analog signal chains you’ll see in daily engineering life. They control the amount of high-frequency noise that gets into the input of an ADC, aliases down, and makes your system look noisier than you thought it should be. They dim down the high-frequency frizz on an output from your system that, even if it doesn’t cause a problem, makes it hard to see what’s happening at lower frequencies when you probe with a scope that’s faster than you really need (which is basically all scopes these days). These are the little tweaks that help us get our performance right. And I still consider that they’re not really filters.

When your system will simply break unless you can stop certain frequencies from passing through a block, well, that’s when you need a filter—not just a few non-critical Rs and Cs.  And that’s where life gets interesting. For a Filter Wizard, anyway!

Chief Executive of Texas Instruments, Rich Templeton, to Step Down


Texas Instruments said that its chief executive, Rich Templeton, is stepping down after more than 13 years in the captain’s chair of the company he turned into the largest manufacturer of analog chips. On Thursday, the company chose Brian Crutcher, its chief operating officer, to replace him.

The switch from Templeton to Crutcher is scheduled to take place on the first of July. Crutcher, who joined Texas Instruments in 1996, has been in charge of business operations and global manufacturing since he was promoted to chief operating officer last year. He previously presided over the company’s analog and digital light processing businesses.

Choosing Crutcher seems meant to preserve the stability of Texas Instruments, the largest maker of analog chips that handle physical signals flowing inside everything from cars to the industrial machines that put them together. The company’s chips are inside tens of thousands of different electronic devices, and that has effectively turned its financial results into a thermometer for the economy.

The Dallas-based company has been supplying more analog chips as cars and industrial machines are equipped with computers to make them smarter and more efficient. Its revenue has jumped more than 10% in each of the last three quarters. It expects to report fourth-quarter revenue next week of between $3.57 billion and $3.87 billion, up from $3.41 billion the year before.

“The directors have had a number of years to assess Brian’s ability, results and style, and we are highly confident he is TI’s next great leader,” Wayne Sanders, lead director of the Texas Instruments board and its chairman of governance and stockholder relations, said in a statement. “There could not be a better time for an effective transition.”

Templeton is known for ushering Texas Instruments into analog. He was chief executive when, in 2011, Texas Instruments acquired National Semiconductor for $6.5 billion in cash. He also wound down the company’s wireless operations, allowing it to plough cash into embedded and signal processing chips.

“He has been a transformative CEO, one of the industry's finest,” said Sanders. The 59-year-old Templeton has worked at Texas Instruments since 1980, the company said. He became president and chief executive in May 2004 and will remain chairman of the board after he steps down.


Hall-Effect Sensing Makes Sense in Building Automation, Portables


The low power consumption, cost-effectiveness, and smaller footprints offered by Hall-effect sensors make them strong candidates for a wide range of designs and applications.

Download this article in PDF format.

The main trends in today’s building automation and personal electronics market include miniaturization and portability, along with increasing intelligence and lower costs. Achieving these trends, however, entails both challenges and tradeoffs. One of the most promising technologies for this market is Hall-effect sensing.

The Hall effect is a popular magnetic-sensing principle discovered by Edwin Hall in 1879. The principle is simple: a differential voltage is generated across a current-carrying conductor, usually called a Hall element, when a perpendicular magnetic field is applied to it. The voltage results from the Lorentz force exerted by the magnetic field, which pushes the current's electrons toward one side of the conductor and creates a potential difference between the two sides.

1. The voltage is usually referred to as a Hall voltage (VH).

Figure 1 illustrates the Hall-effect principle and the relationship between the current, voltage, and magnetic field. The generated voltage is usually referred to as a Hall voltage (VH). When a magnetic field isn’t present, the distribution of current electrons is uniform and VH is zero. When a magnetic field is applied, the distribution of current electrons is disturbed and will result in a non-zero VH proportional to the cross product of the current and the applied magnetic field. The current is usually fixed, resulting in a direct relationship between VH and the applied magnetic field.
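For a bare Hall element, the ideal relationship behind that proportionality is VH = I·B/(n·q·t), where n is the carrier density, q the elementary charge, and t the element thickness. The sketch below uses illustrative values (not taken from the article) to show why VH lands in the microvolt range:

```python
Q_E = 1.602176634e-19  # elementary charge, C

def hall_voltage(i_amps, b_tesla, n_per_m3, thickness_m):
    """Ideal Hall voltage across a conducting plate: VH = I*B / (n*q*t).
    Real devices add offsets and temperature drift not modeled here."""
    return i_amps * b_tesla / (n_per_m3 * Q_E * thickness_m)

# Illustrative (assumed) numbers: 1-mA bias, 1-mT field,
# carrier density 1e25 per m^3, 1-um-thick element.
vh = hall_voltage(1e-3, 1e-3, 1e25, 1e-6)
print(f"VH = {vh * 1e6:.2f} uV")  # sub-microvolt, hence the amplifier stage
```

The tiny result is exactly why the next stage in the signal chain is a differential amplifier.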

VH is very small, on the order of a few microvolts per volt of the supply voltage used to generate the Hall current, per millitesla of applied magnetic field (µV/Vs/mT). To make this voltage usable, an amplification stage boosts the differential voltage while rejecting any common-mode signal, which is why a differential amplifier is needed at this stage.

Figure 2 presents the typical components of a Hall-effect sensor. Different Hall-effect sensors use the output of the differential amplifier in numerous ways to achieve various functions. For more details about Hall-effect sensors and other magnetic sensor technologies, check out Texas Instruments’ (TI) white paper on the subject.

2. Basic Hall-effect sensor building blocks include a power source, the Hall element, and an op amp.

Hall-Effect Sensing Benefits

Hall-effect sensors have proved to be one of the most popular magnetic field sensors for several reasons:

  • Low cost
  • Low power consumption
  • Small footprint
  • Reliability
  • Simplicity
  • Distance sensing
  • Versatility (switch, latch, linear and multiple magnetic thresholds)
  • True solid state
  • Long lifetimes
  • Ability to operate with stationary input (zero speed)
  • Lack of moving parts
  • Logic-compatible input and output
  • Broad temperature range (−40 to +150°C)

These features translate into various electrical and magnetic specifications in a particular device data sheet. You can learn more about these specifications and how to interpret them in TI’s “Understanding & Applying Hall Effect Sensor Datasheets” application note. Also, TI has a TechNote on the benefits of low power consumption.

Types of Hall-Effect Sensors

Hall-effect sensors are configurable for different applications. They can generally be classified into two categories: threshold-based and linear.

  • Threshold-based Hall-effect sensors can act as a switch that responds to either magnetic pole with a single output (omnipolar), to both poles with two outputs (dual unipolar), with a latched output (bipolar), or to one pole only (unipolar). Typically, if the field originates from a south pole, the output voltage will be positive; if the field source is a north pole, the output voltage will be negative. TI recently developed a family of low-power digital Hall-effect switches (the DRV5032) and latches (the DRV5012). Figure 3 shows the transfer function of these sensors.
  • Linear Hall-effect sensors are sometimes called ratiometric Hall-effect sensors because the output is a proportional signal that varies linearly with the applied magnetic field; the supply voltage determines the output range. To avoid using multiple supplies to bias the differential-amplifier stage, an applied offset (null voltage) shifts the output into the positive range. Also, because a Hall-effect sensor saturates near the maximum and minimum values, the linear region is usually bounded by a few hundred millivolts above ground and a few hundred millivolts below the supply voltage.

3. Threshold-based Hall-effect sensors can have different transfer functions.

Linear Hall-effect sensors can respond to either magnetic pole, or to only one pole to increase device sensitivity. Also, the output can be either unmodulated (a linear analog voltage signal) or modulated (a pulse-width-modulation [PWM] signal with a varying duty cycle) to enable use in noisy environments. Figure 4 shows the transfer functions of various linear Hall-effect sensors.

4. Sensor output can follow the linear (ratiometric) Hall-effect sensor transfer function (top) or be used to generate a PWM signal (bottom).
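Recovering field strength from a ratiometric output is a one-line calculation once the null voltage and sensitivity are known. The sketch below assumes the common convention of a null at Vcc/2 and uses a hypothetical sensitivity figure standing in for a data-sheet value:

```python
def field_from_ratiometric(v_out, v_cc, sens_v_per_mt):
    """Convert a linear (ratiometric) Hall-sensor output voltage to field
    strength in mT. Assumes the zero-field (null) output sits at Vcc/2;
    sens_v_per_mt is a hypothetical data-sheet sensitivity."""
    return (v_out - v_cc / 2.0) / sens_v_per_mt

# Assumed example: 5-V supply, 50-mV/mT sensitivity, 3.0-V reading
b_mt = field_from_ratiometric(3.0, 5.0, 0.050)
print(f"B = {b_mt:.1f} mT")  # 10.0 mT
```

With the polarity convention described above, a reading below the null voltage would yield a negative value, indicating the opposite pole.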

Applications

Hall-effect sensors possess many features that make them attractive for the building automation and personal electronics markets. What follows are several use cases where these sensors enable different functions for various end equipment.

Building automation

5. Building-automation use cases for Hall-effect sensors include rotary encoding (a), tampering/mount detection (b), and self-test (c).

  • Rotary encoding: Latched-type (such as TI’s DRV5012) or switched-type unipolar Hall-effect sensors detect the number of rotations of small motors in order to determine the position of electric blinds/shades or smart door locks (Fig. 5a). (See TI’s “Incremental Rotary Encoder Design Considerations” TechNote.)
  • Open/close detection: For reliable, highly connected, and portable Internet of Things devices (e.g., battery-powered wireless windows and door security sensors), switched-type Hall-effect sensors are preferable to reed switches because of their low power consumption, smaller footprint, and higher reliability. Plus, they have no moving parts and are less prone to environmental conditions. (See TI’s “Low-Power Door and Window Sensor with Sub-1 GHz and 10-Year Coin Cell Battery Life Reference Design.”)
  • Tamper detection: Switch-type Hall-effect sensors can detect the presence or absence of portable security devices on their mounts. Examples include battery-powered doorbells or wireless commercial smoke detectors (Fig. 5b).
  • Self-test activation or 1-bit wireless local communication: It can be difficult to physically access certain devices that require regular maintenance to trigger self-testing/diagnostic modes. Placing a switched-type Hall-effect sensor inside the device of interest and using a magnetic wand to trigger a self-test or diagnostic mode eases this process (Fig. 5c).
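
For the rotary-encoding case above, two latched Hall-effect sensors are commonly mounted 90 deg. apart so that their outputs form a quadrature pair; a sketch of the generic decoding logic (not tied to any particular sensor part) might look like this:

```python
# Quadrature decoding with two latched Hall sensors (A and B) 90 deg apart.
# Transition table: +1 = one step clockwise, -1 = one step counterclockwise.
# Assumes clean latched outputs; states are (A, B) bits.
_STEP = {
    (0, 0): {(0, 1): +1, (1, 0): -1},
    (0, 1): {(1, 1): +1, (0, 0): -1},
    (1, 1): {(1, 0): +1, (0, 1): -1},
    (1, 0): {(0, 0): +1, (1, 1): -1},
}

def count_steps(samples):
    """Accumulate signed steps from a sequence of (A, B) samples."""
    pos = 0
    for prev, cur in zip(samples, samples[1:]):
        pos += _STEP.get(prev, {}).get(cur, 0)  # ignore repeats/invalid jumps
    return pos

# One full clockwise electrical cycle = 4 steps
print(count_steps([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]))  # 4
```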

Personal electronics

  • Position detection: Switch-type Hall-effect sensors can optimize and enable a number of functions in personal electronics devices, including wireless charging alignment for phones and tablets; cover detection for phones and tablets; lid/screen position detection for notebooks; and accessory detection for tablets.

Figure 6 shows a couple of examples of these use cases using threshold-based Hall-effect switches. Linear Hall-effect sensors can precisely detect button positions for gaming and virtual-reality controllers, and screen angle for 360-deg. rotating notebooks.

6. Personal electronics use cases: Hall-effect switches for position detection (a); linear sensor in gaming controller and 360 laptops (b).

Summary

The benefits of Hall-effect sensors, such as low power consumption, cost-effectiveness, and smaller footprints (just to mention a few), position them well within building automation and personal electronics devices. Their versatility enables a variety of use cases, from rotary encoding in smart locks and shades to position detection in phones, tablets, and notebooks.


Electronic Design's Power Products of the Week (1/21-1/27)


Having trouble tracking down three-phase MOSFET driver ICs or hybrid DC/DC controllers for your latest design? Our editors have pulled together the power semiconductors, electronics, and other products that stuck out over the last week. Our list could have just what you need.

Enhanced Circuit Yields Versatile, Efficient Switch-Mode Solenoid/Relay Driver (.PDF Download)


Several years ago, Paul Rako, in collaboration with the late, great Bob Pease, wrote a terrific article about the finer theoretical and practical points of electromagnetic solenoid-driver circuits.1 It noted that most solenoids need less power to sustain actuation after the plunger is pulled in than is needed to actuate them in the first place. Important power saving and heat reduction without loss of mechanical performance are therefore possible if drive voltage starts out high for pull-in, then backs off to a lower value for hold.

Bob’s method for power reduction (see the first figure of Paul’s discussion) was elegantly simple, consisting merely of a resistor and capacitor connected in parallel with each other and in series with the solenoid coil. Transient pull-in power is supplied by the capacitor, while steady-state hold current flows through the resistor, chosen to be 60% to 70% of the resistance of the coil. Hold current is therefore reduced by ~40% and coil heat production is cut by more than 60%. That’s a pretty impressive reduction in solenoid-coil power dissipation. However, that series resistor consumed power, too: 70% as much as the solenoid.

1. This switch-mode solenoid-drive circuit maximizes hold-mode power savings by using comparators to efficiently manage drive waveforms.

The circuit presented here (Fig. 1) takes the same power-reduction principle to the next logical level by eliminating the series voltage-dropping resistor and replacing it with efficient switch-mode operation. (Note that the design applies equally to driving relay coils and contactors.)

Enhanced Circuit Yields Versatile, Efficient Switch-Mode Solenoid/Relay Driver


In this Idea for Design, a comparator-based switch-mode solenoid-driver circuit improves efficiency by tailoring the drive to the pull-in versus hold-mode current requirements.

Download this article in PDF format.

Several years ago, Paul Rako, in collaboration with the late, great Bob Pease, wrote a terrific article about the finer theoretical and practical points of electromagnetic solenoid-driver circuits.1 It noted that most solenoids need less power to sustain actuation after the plunger is pulled in than is needed to actuate them in the first place. Important power saving and heat reduction without loss of mechanical performance are therefore possible if drive voltage starts out high for pull-in, then backs off to a lower value for hold.

Bob’s method for power reduction (see the first figure of Paul’s discussion) was elegantly simple, consisting merely of a resistor and capacitor connected in parallel with each other and in series with the solenoid coil. Transient pull-in power is supplied by the capacitor, while steady-state hold current flows through the resistor, chosen to be 60% to 70% of the resistance of the coil. Hold current is therefore reduced by ~40% and coil heat production is cut by more than 60%. That’s a pretty impressive reduction in solenoid-coil power dissipation. However, that series resistor consumed power, too: 70% as much as the solenoid.
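
The percentages quoted above follow directly from Ohm's law; a quick numerical check with the series resistor set to 0.7× the coil resistance:

```python
# Pease's series-RC hold trick: R_series = 0.7 * R_coil.
# Hold current and coil power are computed relative to full pull-in drive.
r_ratio = 0.7                  # series resistor / coil resistance
i_hold = 1 / (1 + r_ratio)     # ~0.59 -> hold current reduced by ~40%
p_coil = i_hold ** 2           # ~0.35 -> coil heat cut by more than 60%
p_resistor = p_coil * r_ratio  # resistor dissipates 0.7x the coil power

print(f"hold current: {i_hold:.0%} of pull-in")              # 59%
print(f"coil power:   {p_coil:.0%} of pull-in")              # 35%
print(f"resistor power vs. coil: {p_resistor / p_coil:.0%}") # 70%
```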

1. This switch-mode solenoid-drive circuit maximizes hold-mode power savings by using comparators to efficiently manage drive waveforms.

The circuit presented here (Fig. 1) takes the same power-reduction principle to the next logical level by eliminating the series voltage-dropping resistor and replacing it with efficient switch-mode operation. (Note that the design applies equally to driving relay coils and contactors.)

The driver topology is organized around the four analog comparators of an old “friend,” the LM339 quad IC (A1 through A4), which combine to control a power FET (Q1) in response to the logic-level ENABLE input. Solenoid actuation begins when ENABLE activates A1, turning on Q1 and A2 and starting the drive cycle (Fig. 2).

2. The drive-sequence timing controls the solenoid pull-in and subsequent hold operation to conserve power.

The V1 timing ramp generated by A3 rises at a rate determined by C1 (pull-in time Tpull-in = 5 × 10⁵ × C1 = 50 ms for C1 = 0.1 µF) and the associated resistor network, generating the initial full-voltage (V+) PULL-IN pulse V3 at Q1’s gate. This continues until V1 arrives at the V2 threshold of drive modulator A2 (set by R1 and R2), initiating reduced-power HOLD mode. Modulation of V3 and Q1 conduction is driven by the A4 oscillator, which starts up when V1 rises to A4’s threshold and imposes a triangular ripple on V1. This cyclically switches A2, thereby establishing a ~70% duty factor on Q1’s conduction. The result reproduces the hold-mode solenoid power-dissipation savings described in the Pease/Rako article, while avoiding the inefficiency of a voltage-dropping resistor.
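
The pull-in interval scales directly with C1 per the timing relation above; a quick check for a few capacitor choices (the alternate C1 values are illustrative, not from the article):

```python
def pull_in_time_ms(c1_farads):
    """Pull-in interval from the article's relation Tpull-in = 5e5 * C1."""
    return 5e5 * c1_farads * 1e3  # seconds -> milliseconds

for c1 in (0.1e-6, 0.22e-6, 0.47e-6):
    print(f"C1 = {c1 * 1e6:.2f} uF -> Tpull-in = {pull_in_time_ms(c1):.0f} ms")
```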

The versatility of the resulting driver is enhanced by its ability to accept a power source from 12 to 24 V, and to accommodate solenoid current demand up to 10 A (i.e., up to 240 W), so that this single circuit can serve in a wide variety of solenoid-drive applications. Hold-mode power consumption is reduced by 60%, and overall efficiency easily exceeds 90%. Reset of the C1 timing capacitor takes less than a millisecond.

Steve Woodward has authored over 50 analog-centric circuit designs. A self-proclaimed "certified, card-carrying analog dinosaur," he is a freelance consultant on instrumentation, sensors, and metrology to organizations such as Agilent Technologies, the Jet Propulsion Laboratory, the Woods Hole Oceanographic Institution, Catalyst Semiconductor, Oak Crest Science Institute, and several international universities. With seven patents to his credit, he has written more than 200 professional articles and has served as a member of the technical staff at the University of North Carolina.

Reference

1. “What’s All This Solenoid Driver Stuff, Anyhow?”, Electronic Design, August 5, 2013.

What’s All This Prototype PCB Stuff, Anyhow? (.PDF Download)


Back in 2010, Bob Pease sent me a couple pictures of some prototype printed circuit boards (PCBs) he had made in the early 1960s. Pease got to be a great analog engineer because he would hack away at circuits, working with his hands and real parts, instead of computer simulations. Pease wrote:

“The first photo (Fig. 1) looks like a typical P45-type discrete-component Philbrick op-amp from the 1963-1970 era. We had some excellent performance, in small packages. If you didn't have a suspicious eye, you'd say, OK, big deal, it's a fair layout.

1. Bob Pease made this prototype of the Philbrick P45 op amp back in the 1960s. The top-side of the board looks like a production PCB.

“The reverse view in the second photo (Fig. 2) shows how we did it, a classical pseudo-PCB layout. There are many small foils, and several long buses. That's how we liked to do it. We did this to represent that this was a feasible circuit, and we could plug it in and evaluate the circuit in a user's system. Yet from the top, it looks just like a real PCB layout. If we were in a big hurry, we could go forward from the pseudo-PCB layout to a real PC board in a couple days. / rap”

2. The bottom of the Pease prototype shows how the leads connect to serve as traces. It’s possible to change and rework components, but not easy.

I love that link to the P45 op amp, since it shows the actual production board. You can see the similarities to Pease’s prototype. Pease was known for his “airball” prototypes; he soldered together leaded components in the air. It was OK for low-frequency circuits.

An “airball” is not very conducive to putting things into production. I always prefer to get a real circuit board as soon as possible. Even if it just brings power to the chips, it mounts them firmly, and you can cut and jump and hack on them much like an “airball” prototype. You can also “dead bug” on a PCB, where you glue a chip with its leads pointing upwards and then connect to the leads with wires and discrete components. So-called “Manhattan-style” dead-bugging even lets you prototype RF circuits.

What’s All This Prototype PCB Stuff, Anyhow?


Airballs? Dead bugging? Pegboard? Toner transfer? Paul Rako runs through the evolution of prototyping boards, from the days of Bob Pease to approaches he finds most effective.

Download this article in PDF format.

Back in 2010, Bob Pease sent me a couple pictures of some prototype printed circuit boards (PCBs) he had made in the early 1960s. Pease got to be a great analog engineer because he would hack away at circuits, working with his hands and real parts, instead of computer simulations. Pease wrote:

“The first photo (Fig. 1) looks like a typical P45-type discrete-component Philbrick op-amp from the 1963-1970 era. We had some excellent performance, in small packages. If you didn't have a suspicious eye, you'd say, OK, big deal, it's a fair layout.

1. Bob Pease made this prototype of the Philbrick P45 op amp back in the 1960s. The top-side of the board looks like a production PCB.

“The reverse view in the second photo (Fig. 2) shows how we did it, a classical pseudo-PCB layout. There are many small foils, and several long buses. That's how we liked to do it. We did this to represent that this was a feasible circuit, and we could plug it in and evaluate the circuit in a user's system. Yet from the top, it looks just like a real PCB layout. If we were in a big hurry, we could go forward from the pseudo-PCB layout to a real PC board in a couple days. / rap”

2. The bottom of the Pease prototype shows how the leads connect to serve as traces. It’s possible to change and rework components, but not easy.

I love that link to the P45 op amp, since it shows the actual production board. You can see the similarities to Pease’s prototype. Pease was known for his “airball” prototypes; he soldered together leaded components in the air. It was OK for low-frequency circuits.

An “airball” is not very conducive to putting things into production. I always prefer to get a real circuit board as soon as possible. Even if it just brings power to the chips, it mounts them firmly, and you can cut and jump and hack on them much like an “airball” prototype. You can also “dead bug” on a PCB, where you glue a chip with its leads pointing upwards and then connect to the leads with wires and discrete components. So-called “Manhattan-style” dead-bugging even lets you prototype RF circuits.

3. Prototypes for a Harley generator voltage regulator use pegboard (top) and home-etched (bottom) prototyping methods.

Looking in my drawer of old prototypes, I found two early attempts at a generator-mounted voltage regulator for my Harley Sportsters (Fig. 3). I started out using plated “pegboard” PCB material. It has holes on tenth-inch centers, with a solder pad on both sides. The holes are plated-through like a production PCB. It worked pretty well in the days of leaded components and DIP (dual-inline plastic) ICs with tenth-inch centers. I definitely prefer it to original Vectorbord®, which has the holes but no solder pads. Vector also sells the plated-through boards, as well as boards with pads on one or both sides but with unplated holes.

DIY PCBs

After I grew weary of trying to connect all of the leads on a pegboard, I started to etch my own PCBs. First, you buy copper-clad material that has photoresist on one or both sides. Next, design the layout in reverse, where the drawn lines define copper islands, similar to what a PCB milling machine must do. I used AutoCAD for this. Then print out the layout on clear transparency film, which you use to mask off the PCB while exposing it to light. Finally, drop the PCB into a dish of ferric-chloride etchant, preferably heated on a hot plate.

You can drill a couple of holes in the pre-sensitized PCB material to serve as registration marks for the transparency film. That way, you can make two-sided PCBs at home. Unfortunately, the holes won’t be plated through. You soon learn never to put vias under a component, since that makes it tough to solder a jumper wire into the via hole.

4. The top-sides of the prototype Harley voltage regulators mount components like production PCBs. Both methods allow the hacking in of changes and new components.

Like the Pease prototypes, these methods look similar to a production PCB from the top (Fig. 4). What these methods give you is the same component placement as a production PCB. That helps you check clearances, especially upwards clearance. It’s easy to forget about the Z-axis when working in a 2D CAD (computer-aided design) program.

The pegboard prototype could make compact circuits back in the through-hole component days (Fig. 5). It also could serve as a way to mount a little prototype on one of those DIP-style prototyping component carriers. This allows you to make a whole series of circuits that are in easily replaceable modules. Those can go in a wirewrap motherboard. You can also use a larger sheet of pegboard to mount sockets that take the modules.

5. Pegboard construction allowed tight circuit layout back in the through-hole component days (top). You can solder a pegboard prototype into a component carrier to make a little module (bottom).

Besides pre-sensitized photoresist copper-clad, you can use toner-transfer methods to put traces on plain copper-clad PCB material. Your printer toner becomes the PCB traces when you print it on special toner-transfer film. You heat the film as you press it to the copper-clad. The toner then transfers off the film and onto the PCB. It is slick, but shares the problem of no plated-through holes. Still, one nice thing about both home-brew methods is that you can make boards with large copper areas that serve as ground planes (Fig. 6).

6. Home-etched PCBs can have large ground planes on one side. These switching power-supply prototypes have better performance and lower noise than airball prototypes. They also let you evaluate creepage and clearance distances before going into production.

Dealing with Those Surface-Mounts

Modern components are surface-mount. That can make them hard to prototype. One solution is to use adapter boards and demo boards (Fig. 7). Adapter boards mount tiny surface-mount components and bring them out to larger pin spacing, often the same tenth-inch spacing of pegboards and traditional through-hole PCBs.

7. Adapter boards and demo boards let you use surface-mount components. Schmartboard makes PCBs with extra-thick soldermask (top). This makes it easy to solder in tiny surface mount components without a microscope or fancy soldering iron. Alan Martin at Texas Instruments made up adapter boards to give DIP spacing to tiny components (middle). The folks at Circuits@Home made a Jim Williams design into a PCB (bottom). You can use it standalone or solder it into a bigger breadboard.

A demo board can serve as a great circuit element that you dead-bug or hack into your prototype PCB. Some demo boards are designed to plug into standard systems, such as Arduino Shield PCBs. I have engineer pals who design their own prototype PCBs, which they can use to quickly hack up trial circuits (Fig. 8).

8. Jon Dutra, of Linear Technology, National Semiconductor, Maxim, Microsoft, and now Apple, made this SOIC prototype board. It allows you to solder down an SOIC chip, and then connect the chips to power rails and air-wire in components. You can even use surface-mount components, given a little planning.

One prototyping method I can’t endorse is the old jumper-wire breadboards. I used them when I started out (Fig. 9), and consider them a “hobbyist” method unsuited to engineering prototypes. You soon learn that if you stuff a larger leaded component, like a 5-W resistor, into the hole, it stretches out the terminals inside. That location on your jumper board will never make a good connection to a regular-sized lead again. If you run too much current through them, it burns the terminals and melts the plastic. More ruined locations. The spread-out layout with unconnected stubs on every node means you can’t use them for anything over a few megahertz. You sure can’t do RF prototypes on them. I sold all of mine at the flea market. It’s bad enough to have problems coming from your design concepts—you don’t need them in your component connections.

9. Jumper-wire breadboards are tempting, but they often cause problems. It worked OK for this old battery-powered, transformer-coupled guitar amp, but jumper-wire prototypes are often more trouble than they are worth.

What About the Soldermask or Silkscreen?

One problem all of these methods share is that there is no silkscreen or soldermask. The silkscreen helps you build and troubleshoot your prototypes. The soldermask prevents solder-bridging and gives the same high-frequency impedance as your production boards.

There are many inexpensive PCB prototyping shops. I used Proto-Express, since they were in Silicon Valley where I lived. I’ve also used Advanced Circuits in Colorado, and Sunstone in Oregon. I have had friends very pleased with PCB-Pool out of Ireland. I also heard great things about PCBCart in China and OSH Park. With all of the competition in prototype fab houses, it can be very economical to get a real circuit board for your design.

In addition, full-featured PCB CAD programs are getting cheaper and cheaper. The Altium-based CircuitStudio from Newark is around 695 bucks for a competitive upgrade. It’s $995 to buy outright, which I did earlier this year. While you need to know how to solder up a quick prototype like in the pictures, it’s also important to be able to whip out a real PCB design. You can get two-sided boards in a day, and having cheap 4-layer boards available means you can do some really complex high-performance designs. Best yet, once you have a real prototype PCB, it’s that much easier to get the design into production.

Discrete Analog ICs Literally Become Nearly Invisible


TI introduced an op amp and family of comparators in its new X2SON packages measuring just 0.8 × 0.8 mm—and there's no compromise in performance.

Despite the widespread use of highly integrated analog and mixed-signal ICs, or perhaps as a consequence of them, there’s still a large need for single-function analog components. Sometimes they are the remedy for unique signal-processing requirements; other times, it’s to fix an oversight in the signal path (yes, it happens). That’s why thousands of discrete op amps, comparators, and similar components are available from dozens of sources.

But the push for smaller ICs is inescapable, even for these analog components. Texas Instruments just announced its first entrants—the TLV9061 op amp and TLV7011 family of comparators—to be housed in the company’s X2SON package, which is claimed as the industry’s smallest. These packages measure 0.8 × 0.8 mm, resulting in a nearly unseeable footprint of just 0.64 mm2.

Now you see them, and now maybe you don’t: Both a comparator family and op amp from Texas Instruments require just a 0.64-mm2 footprint due to their unique X2SON package.

The TLV9061 op amp features a gain bandwidth (GBW) of 10 MHz, slew rate of 6.5 V/µs, and noise spectral density of 10 nV/√Hz, plus rail-to-rail input and output performance. In addition, it incorporates EMI filtering on the inputs to deliver robust performance in systems facing RF noise, while reducing the need for external discrete circuitry. The device is fully specified, including input bias current and offset drift, over the −40 to +125°C temperature range. Prices begin at US $0.19 in 1,000-unit quantities; more information is available here.
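
GBW and slew rate interact in a predictable way: the classic full-power-bandwidth relation f = SR/(2π × Vpeak) (standard op-amp theory, not a figure from the TLV9061 datasheet) shows how much undistorted sine-wave swing a 6.5-V/µs slew rate supports:

```python
import math

def full_power_bw_hz(slew_V_per_s, v_peak):
    """Largest sine frequency reproducible at v_peak without slew limiting."""
    return slew_V_per_s / (2 * math.pi * v_peak)

# TLV9061 slew rate, 1-V peak output swing (swing value is illustrative)
print(f"{full_power_bw_hz(6.5e6, 1.0) / 1e6:.2f} MHz")  # ~1.03 MHz
```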

The TLV7011 family of nanopower comparators, presently consisting of four devices, consumes 50% less power than competitive comparators, according to TI. Two of the comparators operate from 1.6 V to 5.5 V, with dc offset of 0.5 mV, 260-ns propagation delay, and 5-µA supply current; corresponding specifications for the other two units are 1.6- to 6.5-V supply, 0.1-mV offset, 3-µs propagation delay, and 335-nA current. All of the comparators guarantee no phase reversal and include internal hysteresis for overdriven inputs to increase design flexibility and reduce the need for external components. Pricing begins at US $0.25 in 1,000-unit quantities; click here for the datasheet.
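
The internal hysteresis mentioned above is what keeps a noisy or slowly moving input from chattering the output; here is a minimal behavioral model (the reference voltage and hysteresis band are illustrative, not the TLV7011's actual values):

```python
class HysteresisComparator:
    """Comparator with hysteresis: separate rising and falling thresholds."""
    def __init__(self, v_ref=1.0, hyst=0.01):
        self.hi = v_ref + hyst / 2   # input must exceed this to switch high
        self.lo = v_ref - hyst / 2   # input must drop below this to switch low
        self.out = 0

    def update(self, v_in):
        if v_in > self.hi:
            self.out = 1
        elif v_in < self.lo:
            self.out = 0
        return self.out              # inside the band: hold the last state

# Noisy input hovering at the 1.0-V reference: output switches once, no chatter
cmp_ = HysteresisComparator()
samples = [0.9, 0.999, 1.001, 1.004, 0.996, 1.02, 0.997, 1.002]
outs = [cmp_.update(v) for v in samples]
print(outs)  # [0, 0, 0, 0, 0, 1, 1, 1]
```

Without the hysteresis band, every small noise excursion across the single threshold would toggle the output.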

For design and evaluation, TI offers Spice models of both the op amp and comparator family at the TINA-TI SPICE model site. To breadboard designs using these tiny comparators, the company also offers a DIP adapter evaluation module ($5); just remember that you’ll have to factor the associated DIP-module parasitics into your model. Also, you’d better be sure that your production facility has pick-and-place systems that can handle and precisely place these tiny packages.

Free Design Rule Checker Focuses on PCB Designs


Siemens Mentor group now offers a free version of its HyperLynx design-rule-checking software for checking printed circuit boards using the latest high-speed serial interfaces.

Mentor, a Siemens business, offers a wide range of software, from real-time operating systems (RTOSs) to chip-design tools. Also in the mix is printed-circuit-board (PCB) design software.

Quite a bit has changed in the PCB design arena over time, and Mentor’s PADS PCB design tool and Xpedition Enterprise multi-board design tool have been at the forefront. However, these days, PCB design requires a lot more than just layout tools because of the increased use of wireless and high-speed serial interfaces, which have more demanding PCB requirements.

The HyperLynx software helps PCB engineers efficiently analyze, solve, and verify critical PCB design requirements so that boards work as intended. HyperLynx includes the HyperLynx SI signal-integrity analysis tool, HyperLynx PI power-integrity analysis tool, and the HyperLynx DRC design-rule-checking software. HyperLynx SI and HyperLynx PI incorporate trace modelers, field solvers, and simulation engines.

1. HyperLynx DRC rules allow for automatic checking of PCB designs.

Reworking a PCB is usually less expensive than redoing a chip, but the costs can still be significant. Likewise, problems may not arise until a device has been in the field, so it pays to make sure things are designed properly from the start. The aforementioned tools aren’t inexpensive, but they can easily pay for themselves in reducing the time to market as well as allowing designers to confidently deliver working designs for production.

Now, however, a free version of HyperLynx DRC is available (Fig. 1). It’s a handy way to try out the software; many designers can take advantage of the tool, and some may discover they don’t need the more advanced services or other tools.

The new free version of HyperLynx DRC is able to perform impedance and topology checks as well as differential impedance, pair checking, and pair phase matching. It can handle decoupling capacitor placement, metal islands, and net crossing gaps. The tool maintains eight core design rules and supports PCB design flows from Mentor, Zuken, Cadence, and Altium, in addition to the ODB++ and IPC-2581 standards.

The low-cost “gold” edition has a 22-rule set with a large number of additional features, including support for long nets, edge-rate checking, multiple vias, and different topologies such as fly-by and star. The higher-end developer edition provides access to advanced geometry engines and lets users write custom design rules in VBScript or JavaScript, with a built-in script-debugging environment.

2. All versions of HyperLynx DRC, including the free option, use a graphical user interface in addition to providing results in spreadsheet formats.

The HyperLynx Fast 3D Solver is an accelerated, full-3D electromagnetic-quasi-static (EMQS) extractor that can handle power integrity, low-frequency SSN/SSO, and complete-system Spice model generation while accounting for skin-effect impact on resistance and inductance. This would be used for the latest system-in-package (SiP), package-on-package (PoP), stacked die, and multichip module (MCM) designs.

All versions of HyperLynx DRC share a common graphical environment (Fig. 2).


Matched JFETs Improve Photodiode Amplifier (.PDF Download)

$
0
0

Linear Technology has a good application note about using a discrete JFET to improve the ac response of a large-area photodiode. The circuit uses the JFET as a voltage follower on the cathode side of the photodiode (Fig. 1). This nulls the effect of the diode's internal resistance and capacitance. If there’s no voltage change across the diode, the resistance and capacitance have no effect on the circuit. And the photocurrent generated by the diode is unaffected by the bootstrapping action of the JFET.

1. This circuit uses a JFET to bootstrap the cathode of the photodiode. It eliminates the effect of diode resistance and capacitance. This improves bandwidth and reduces noise, but puts a dc voltage across the diode.

The bandwidth of the JFET is greater than that of the amplifier, so the bootstrapping action extends to the unity-gain bandwidth of the amplifier. The bootstrapping makes the photodiode's 3000-pF internal capacitance appear to be zero at the amplifier input. Removing this capacitance will eliminate the input pole, or lag, from the circuit and allow you to use a smaller compensation capacitor. This will extend the frequency response of the circuit (Fig. 2).

2. Spice plots demonstrate the improvement in circuit bandwidth offered by JFET bootstrapping. Without the bootstrapping, the circuit bandwidth is 16.6 kHz. With the bootstrapping JFET, the bandwidth increases to 383.7 kHz.

Matched JFETs Improve Photodiode Amplifier


Using a bootstrapping scheme with a JFET can enhance a photodiode’s ac response, but adds unwanted dc voltage. Remedy that problem with a matched dual-JFET.

Download this article in PDF format.

Linear Technology has a good application note about using a discrete JFET to improve the ac response of a large-area photodiode. The circuit uses the JFET as a voltage follower on the cathode side of the photodiode (Fig. 1). This nulls the effect of the diode's internal resistance and capacitance. If there’s no voltage change across the diode, the resistance and capacitance have no effect on the circuit. And the photocurrent generated by the diode is unaffected by the bootstrapping action of the JFET.

1. This circuit uses a JFET to bootstrap the cathode of the photodiode. It eliminates the effect of diode resistance and capacitance. This improves bandwidth and reduces noise, but puts a dc voltage across the diode.

The bandwidth of the JFET is greater than that of the amplifier, so the bootstrapping action extends to the unity-gain bandwidth of the amplifier. The bootstrapping makes the photodiode's 3000-pF internal capacitance appear to be zero at the amplifier input. Removing this capacitance will eliminate the input pole, or lag, from the circuit and allow you to use a smaller compensation capacitor. This will extend the frequency response of the circuit (Fig. 2).

2. Spice plots demonstrate the improvement in circuit bandwidth offered by JFET bootstrapping. Without the bootstrapping, the circuit bandwidth is 16.6 kHz. With the bootstrapping JFET, the bandwidth increases to 383.7 kHz.

Bootstrapping Benefits

The bootstrapping scheme also has beneficial effects on noise. The noise of a photodiode amplifier operated as a transimpedance amplifier (TIA) is a complex combination of diode noise, amplifier current noise, amplifier voltage noise, and the Johnson thermal noise of the feedback resistor. The amplifier's offset voltage and input equivalent voltage noise are multiplied by the noise gain of the circuit and appear at the output. Analog Devices’ op-amp handbook demonstrates that the dc noise gain is 1 + RF/RIN, and the ac noise gain is 1 + CIN/CF. Since the bootstrapping makes the diode resistance look infinite, the dc noise gain is 1. That means the amplifier's offset voltage isn’t multiplied when it appears at the output.

The ac condition is almost as good. Without the JFET, the input capacitance is the diode capacitance plus the amplifier input capacitance plus any stray capacitance on your printed-circuit-board (PCB) layout. The 3000-pF diode capacitance dominates this, so that the ac noise gain is 1 + 3015/0.25 or 12,061. With the JFET, the circuit input capacitance drops to the sum of the JFET capacitance, the amplifier input capacitance, and the stray capacitance.

Now the ac noise gain is 1 + 25/0.25, or 101. The amplifier's equivalent input noise is multiplied by this ac noise gain. The improvements in the circuit's noise performance are obvious when the ac noise gain goes from 12,061 to 101. The 1-nV/√Hz contribution of the JFET adds in root-sum-square fashion to the amplifier's 8-nV/√Hz noise, giving about 8.1 nV/√Hz. This RMS addition means the much larger amplifier input equivalent noise is dominant.
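
The noise-gain arithmetic above is easy to verify numerically with the article's capacitance values:

```python
# AC noise gain of the TIA: 1 + C_in / C_f (capacitances in pF, from the article)
c_f = 0.25  # feedback compensation capacitance

ng_without_jfet = 1 + 3015 / c_f   # diode (3000 pF) + amp input + stray
ng_with_jfet = 1 + 25 / c_f        # bootstrapped: diode capacitance removed

print(ng_without_jfet)  # 12061.0
print(ng_with_jfet)     # 101.0
print(f"improvement: {ng_without_jfet / ng_with_jfet:.0f}x")
```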

3. Drawing the 4.99-kΩ load line on the transfer curves from the JFET datasheet shows the 0.35- to 0.9-V dc error that will appear across the diode with different JFETs.

Unfortunately, the JFET does put a dc voltage across the photodiode. You can infer this by drawing the load line of the 4990-Ω bias resistance on the JFET transfer curves published in its datasheet (Fig. 3). Since the plus pin of the op amp is at ground, the minus pin must also be at ground, ignoring any offset-voltage error. That means the JFET gate is at ground. So, as the gate-source voltage goes from −1 to 0 V, the voltage across resistor RBIAS goes from 6 to 5 V. Therefore, the current through RBIAS and the JFET goes from 1.2 mA to 1 mA.

Drawing that load line on Figure 7 of the BF862 datasheet shows a 0.35- to 0.9-V reverse bias across the diode that will create a dark current flow in the photodiode. This dc current will appear at the output as an error. Worse yet, this error will be different in every circuit as you use JFETs with minimum, maximum, and typical JFET transfer characteristics. The dc voltage and the error it creates will also change with temperature.
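
Those load-line numbers can be reproduced directly. With the gate at ground, the JFET source sits at −VGS, and the quoted 6-V and 5-V drops imply that RBIAS returns to a −5-V rail (an inference from the text, not a stated supply):

```python
# JFET source-follower bias: gate at 0 V, source at -Vgs, RBIAS to -5 V.
R_BIAS = 4990.0   # ohms
V_NEG = -5.0      # negative rail (inferred from the article's 6-V/5-V drops)

def bias_current_mA(v_gs):
    v_source = -v_gs                        # follower: source sits -Vgs above gate
    return (v_source - V_NEG) / R_BIAS * 1e3

print(f"{bias_current_mA(-1.0):.2f} mA")    # 1.20 mA at Vgs = -1 V
print(f"{bias_current_mA(0.0):.2f} mA")     # 1.00 mA at Vgs = 0 V
```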

Make the Move to Matched

Kirkwood Rough of Upstairs Amps suggested using a matched dual-JFET. This removes the bulk of the dc voltage across the photodiode (Fig. 4). The lower JFET replaces the resistor with a current source, which improves the response of the voltage follower and makes it more accurate.

4. Using a dual-JFET improves the bootstrapping and reduces the dc voltage across the diode. The capacitance across the source resistor reduces the resistor noise contribution.

Scott Wurcer, a Fellow at Analog Devices, suggested that matching the source resistors of the JFETs ensures that the cathode of the photodiode will be very close to 0 V (Fig. 5). Using an amplifier with lower offset voltage also minimizes any dc across the diode. You can add a capacitor across the bootstrap JFET source resistor to reduce the noise contribution of this added resistor.

5. The dc offset across the photodiode is further reduced by matching the source resistors of the two JFETs.

The LSK489 dual-JFET has superior transconductance to the BF862. That, combined with the current-source load presented by the other JFET, improves the bootstrapping, especially at higher frequencies. The AD8610B amplifier has 2-nV/√Hz lower voltage noise, which will further reduce ac noise. It has one-seventh the bias current, which will reduce the dc output error, and half the offset error of the LTC6244.

Nothing is for free, though—the AD8610B has half the bandwidth and will cost you at least a dollar more. Not only that, it’s a single whereas the LTC6244HV is a dual. For this circuit, the bandwidth should be adequate, but cost is always a concern.

6. By using potentiometers, you can tweak out any tiny offset voltage across the photodiode. This completely eliminates any dark-current error.

The proposed circuit will still have variable, albeit small, dc errors due to resistor tolerances. To bring the voltage at the cathode of the photodiode to within a few microvolts of ground, you can add adjustment potentiometers to the circuit (Fig. 6), another Kirkwood Rough suggestion. Wurcer suggested a low-offset op amp could servo the bias point to a few microvolts. Paul Grohe of Texas Instruments pointed out that even laboratory power supplies add a lot of noise to our circuits.

To get the optimum noise measurement, you should power these circuits with batteries. You can then use that as a baseline to evaluate the noise contribution of the power supplies in your production design.

The Front End: Dither? I’ll Give You Dither


The term dither means multiple things, from indecision to its application in improving resolution. So how does coffee play into it?

“Are they still dithering about?”

One of my technical articles seemed to have fallen into a black hole. The magazine that had agreed to publish it apparently couldn’t figure out where or when to place it.

But as soon as I wrote that line above to my hard-pressed editor (yes—hard though it may be to believe, I actually let a grown-up comment and even edit my work), I thought “Ah, now there’s a Front End piece just waiting to be written.”

Dither, you see, is a term that the practicing system designer will encounter quite frequently these days. And, whatever we might conclude that it does mean, one thing it does not imply is the inability of a system or circuit to make a single, particular decision.

Both statistical economics and game theory distinguish between two transactional modes: the single encounter and the repeated encounter. The expected value of a business transaction equals the actual value of the transaction if it occurs, times the probability that it will occur.

The same applies to lottery tickets. You might be able to win a million pounds by purchasing a one pound ticket in the UK’s National Lottery, but the expected value of that transaction, allowing for the cost of your ticket, is about −0.6 of a pound, i.e. a loss. We all have some level of intuitive understanding of this, though the prevalence of lottery participants indicates some natural variability.
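The expected-value arithmetic is trivial to write down. The odds used here are illustrative placeholders, chosen only to reproduce the roughly −0.6-pound figure above:

```python
ticket = 1.0         # one-pound stake
jackpot = 1_000_000  # illustrative prize
p_win = 4.0e-7       # assumed odds for this sketch, not the real lottery odds

# Expected value of the transaction: prize weighted by probability, minus the stake
ev = p_win * jackpot - ticket   # about -0.6 pounds, i.e. a loss
```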

So your economic strategy, whether as a lottarian (I made that up, I admit it) or a businessperson, depends on whether you are making one bet, or executing a large number of transactions whose average value is more important than the results of one particular “move.”

Hither Dither

Our everyday interpretation of dithering involves failure to decide on a one-off action. Even when we say of a person that “that one’s a ditherer” because they often exhibit this behavior, we’re focusing on uncorrelated single events and not on the aggregate “value” of their actions.

But in the world of the system designer, dither is in fact a rather splendid way of ensuring that—on average—the quality or outcome of your decisions is a close approximation to the “right value,” or desired outcome. That’s because dither inherently works its magic on a sequence of events. To get the true value out of dither, you must keep dithering! To dither a system is to give it, over time, a richer “search volume” in which it can find the ‘answers’ to the ‘decisions’ that need to be made.

Let’s look at a really simple story example. In your organization, the HR and catering departments are at loggerheads with each other. HR wants you to have free coffee, but the catering department doesn’t want its budget to get hit, so they want you to pay for it in the lovely new canteen.

So, after many meetings, they come to a compromise. This compromise was strongly influenced by the catering department, staffed as it is with wannabe economists with a spare-time game theory obsession. The compromise is this: HR will give you a token, valid for one day, for $1, to buy coffee. And the catering department will set the price of the coffee at $2. Sounds like it might be hard for you to get your caffeine fix, seeing as you have no other source of funds…

Then I come along, and offer you a deal. I will randomly give you $1, or take $1 from you, with a probability P=0.5 for each. So the net monetary value of that deal in the long term—a repeated encounter, in the game theory sense—is zero. But look what it has enabled. Now, with a probability of P=0.5, you have enough money for a coffee that day. You’re able to enjoy, on average, half a cup of coffee per day. Which is exactly what $1 per day would buy you if it wasn’t for this pesky quantization of coffee states. My zero net value deal has enabled you to get your coffee—it has dithered your coffee consumption.
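The coffee game is easy to simulate. A quick Monte Carlo sketch of the deal described above:

```python
import random

def cups_per_day(days=100_000, seed=1):
    """Simulate the zero-net-value dither deal over many days."""
    random.seed(seed)
    cups = 0
    for _ in range(days):
        token = 1                      # HR's $1 token, valid one day
        deal = random.choice((1, -1))  # my offer: +$1 or -$1, P = 0.5 each
        if token + deal >= 2:          # coffee costs $2
            cups += 1
    return cups / days

rate = cups_per_day()   # converges on 0.5 cups per day
```

Half the days you reach the $2 price; the other half you're broke. The long-run average is exactly what an undithered $1/day would be worth if coffee were infinitely divisible.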

Now some of you may say “yes, but that’s just obvious, a child could see that this is how to solve the problem.” And I think that’s part of the point that I’d like to make. Dither isn’t some arcane technical concept; it’s something else that we seem to have a strong intuitive understanding of.

Quantizing Coffee

I mentioned quantization, and of course this is where as a system designer you’re most likely to encounter the use of dither. The obstacle to getting your coffee—the quantization of coffee delivery in $2 steps—is akin to the stepwise transfer function of a signal quantizer, which turns a continuous signal (usually, but not always, a physical quantity such as a voltage), into a countable, discrete quantity. That’s usually—but not always—a digital signal. It’s no coincidence that the word “digital” derives from the Latin for finger, and it was upon fingers that Man used to count, before we invented Excel. And the lineage of the word “quantization” can be traced back to the Latin for “how many?”

Dither, then, is just the zero net value deal that your system offers to a quantizer so that the mean value of the results you get from it are a better reflection of the variability of the underlying signal. The application of dither doesn’t improve the “accuracy” of a single conversion result. But when properly applied, it can significantly improve the resolution of a system in the long term. Because in the long term, all the noise to do with quantization can be averaged away, to reveal how good the system is at capturing very small changes in the underlying process being monitored.
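Here's a minimal illustration of that long-term averaging, using a 1-LSB rounding quantizer and uniform dither; the 0.3-LSB signal value is arbitrary:

```python
import random

def quantize(x):
    """A 1-LSB quantizer: round to the nearest integer code."""
    return round(x)

def measure(signal=0.3, n=200_000, dither=True, seed=7):
    """Average many quantized samples of a constant sub-LSB signal."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        d = rng.uniform(-0.5, 0.5) if dither else 0.0
        total += quantize(signal + d)
    return total / n

undithered = measure(dither=False)   # every sample reads 0; the 0.3 is invisible
dithered = measure()                 # the mean approaches 0.3
```

Without dither, a 0.3-LSB signal is lost forever below the quantizer step. With it, no single conversion is "better," but the average recovers the signal to well below 1 LSB.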

And what of that technical article I mentioned?  Well, as I write, they do still seem to be dithering. I just hope they aren’t getting jittery about publishing my work. Wait, jitter…


The Multi-Switch Detection Interface: A Cure for Many BCM Ailments


Sponsored by: Texas Instruments. An MSDI device bundles many standard interface requirements in a compact package, giving designers more flexibility when integrating a body control module in ever-more-complex automotive designs.


An automotive body control module (BCM) is an electronic control unit (ECU) that manages a wide range of vehicle comfort, convenience, and lighting functions, including the central locking system, power windows, chimes, closure sensors, interior and exterior lighting, wipers, and turn signals.

Figure 1 shows the functional blocks in a BCM. The primary function of the BCM is to monitor the status of discrete switches and analog sensors related to these functions and control high- and low-side switches, relays, and LED drivers. The BCM also exchanges information with other modules over a vehicle network: the Controller Area Network (CAN) and Local Interconnect Network (LIN) protocols are both widely used.


1. A BCM manages a wide range of comfort, convenience, and lighting functions. (Source: TI blog: “The multi-switch detection interface: integrated feature for smaller, more efficient designs,” Fig. 1)

The number of switches and sensors varies from one vehicle to the other, but it can be 100 or more in a high-end implementation. There are two common types of switches in a vehicle (Fig. 2). A digital switch (Fig. 2a) only has two states, open and closed. Examples include the seat-belt engaged switch, rear-window defroster on/off, front and rear fog push button, the trunk switch, and the door-locking switch. The digital switch most commonly switches to ground, but switching to the battery voltage is also possible.

2. Digital (a) and resistive (b) switches are two common options in BCM applications. (Source: TI training: “Challenges in today’s Body Control Module (BCM) design”)

A resistive switch (Fig. 2b) has multiple states or positions. Each state connects a different resistor value to ground when selected. Examples include the ignition key switch, the taillight and headlight control switch, as well as the wiper control switch.

A microcontroller typically monitors the status of the various switches. Each type requires a different interface, but most MCUs feature general-purpose input/output ports (GPIOs) that can be configured to perform multiple functions. A digital switch can be sampled simply by a comparator, since it only has two states. An analog switch outputs a different voltage for each switch position, so the GPIO must connect to an analog-to-digital converter (ADC) to sample the voltage and determine the selected position.

Connecting a mechanical switch to an MCU involves a lot more than hooking up a wire. Two issues pop up: The low-voltage CMOS MCU input must be protected from external transients; and the switch contacts must be provided with a minimum wetting current that establishes a voltage to be measured, and prevents premature failure due to oxidation at the contacts.

Let’s look at a simple switch-to-ground interface (Fig. 3). The traditional method of connecting an external ground-referenced switch uses numerous discrete components. For protection, capacitor C2 shunts ESD and transient energy, and diode D1 blocks high voltages. A GPIO output enables and disables the wetting circuit. A logic “high” output from the GPIO turns on the wetting current with the help of transistors Q1 and Q2; resistors R1, R2, and R3; and capacitor C1. Resistor R4 sets the wetting current at the switch. The voltage at the junction of R4 and R8 establishes the value at the GPIO input when the switch is open.

3. The interface for a mechanical switch requires multiple discrete components to protect the microcontroller and provide wetting current for the contacts. (Source: TI blog: “The multi-switch detection interface: integrated feature for smaller, more efficient designs,” Fig. 2)

In summary, each switch channel requires as many as five resistors, two capacitors, one diode, two FETs, and one unique GPIO connection.

Problems with the Discrete Approach

A discrete implementation such as shown in Fig. 3 poses several challenges for the designer:

High component count: The first issue is the high component count. For example, a BCM with 24 switch inputs requires a total of 78 resistors, 27 capacitors, 24 diodes, and six FETs. A large component count increases the size of the BCM, and pushes up the BOM and manufacturing costs.

High GPIO count: The 24-switch BCM also requires a total of 28 GPIOs—one for each switch, plus four more to control the FET timing. This forces the use of a high-pin-count microcontroller, leading to increased cost and more complex PCB routing.

High power consumption: For fast switch response time, the microcontroller either needs to be active all of the time or be woken up periodically to ensure continuous switch monitoring.

A microcontroller typically draws milliamps of current. This might not be a problem when the vehicle is running and the alternator is charging the battery. However, when the ignition is off, automobile manufacturers require electronic modules to consume minimum quiescent current to prolong battery life. Keeping the BCM microcontroller alive to monitor switches may not be an option.

Variation in wetting current: The wetting current flowing through the switch in Fig. 3 depends on the battery voltage VBATT and resistor R4. Even during normal operation, VBATT can vary by several volts from its nominal value of 14 V due to transient load changes, and drop down to 6 V or less during cranking. Abnormal events such as load dumps and jump starts can cause much larger variations. The wetting current is proportional to the battery voltage. If the battery voltage changes by 20%, say, so does the wetting current.

The wetting current can also change with variations in resistive loading on the switch, especially if it's an analog switch with different resistances for different switch positions. Wetting-current variations complicate the system design. The effects can range from inconvenience to a system malfunction, especially if the BCM reads the wrong switch status.
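A sketch of that proportionality, using the circuit of Fig. 3; the 2-kΩ value for R4 is an illustrative placeholder, not a value from the article:

```python
def wetting_current(v_batt, r4=2000.0, v_switch=0.0):
    """Rough model of the discrete circuit: the wetting current is set
    directly by the battery voltage across R4 (switch closed to ground)."""
    return (v_batt - v_switch) / r4

i_nom = wetting_current(14.0)        # 7 mA at the nominal 14-V battery
i_low = wetting_current(14.0 * 0.8)  # battery sags 20%...
change = (i_nom - i_low) / i_nom     # ...and the wetting current drops 20% with it
```

An MSDI's internally regulated current source breaks exactly this dependence.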

Large ESD capacitor: The discrete design requires a large capacitor on each input to provide system-level ESD protection and prevent damage to other components on the PCB. A large capacitor on the input increases the time needed for the GPIO voltage to settle. This causes delay in the switch response time and forces the microcontroller to stay active longer, increasing the system power consumption.

Sharing of switch status: In some designs, a switch must be monitored by multiple microcontrollers for safety and redundancy reasons. When the same switch is connected to multiple microcontrollers, a blocking diode is typically needed to ensure the current flows only in the desired direction. These diodes increase cost and consume board space.

Design portability and reuse: With the large number of vehicle options available to consumers, creating portable and reusable designs across different platforms is a highly desirable goal. Each switch option (resistive switch, switch to ground, or switch to battery) requires a different discrete design, making it more difficult to create a uniform reference platform for multiple applications. This prolongs the design cycle and consumes more engineering resources.

The MSDI Device Provides an Integrated Solution

A multiple switch detection interface (MSDI) is a convenient way to take care of the issues discussed above. The MSDI integrates the discrete components for multiple channels into a single device (Fig. 4). It provides a common interface for multiple analog or digital switches and communicates their status back to an MCU via an industry-standard serial peripheral interface (SPI).

4. An MSDI device combines the interface components for many switch inputs into a single device. (Source: TI training: “How MSDI helps solve system-level challenges in BCM design”)

MSDI devices have adjustable wetting currents capable of sinking and sourcing currents for both battery- and ground-connected external switch inputs. These currents are internally monitored and controlled, so they remain consistent over a wide range of battery input voltages. MSDI switch inputs can also handle automotive load dump and reverse-battery voltages without the need for discrete blocking diodes and wetting-current components, saving additional board area and cost.

The TIC12400-Q1 is an MSDI device designed for the 12-V automotive environment. The device, which operates independently of the MCU, features a comparator to monitor digital switches, plus a 10-bit analog-to-digital converter (ADC) to monitor multi-position analog switches. Programmable detection thresholds for the ADC and the comparator allow the TIC12400-Q1 to support various switch topologies and system configurations.

The TIC12400-Q1 monitors up to 24 direct switch inputs. Ten inputs can monitor switches connected to either ground or battery. Each input can use one of six selectable wetting-current settings to support different application scenarios.

The integrated 10-bit, 0- to 6-V ADC measures the voltages on any input not set in comparator input mode, including any input threshold that requires special programming or any multi-threshold input. In addition, the ADC can directly monitor external analog signals.

The device can monitor all switch inputs while the MCU is in sleep mode. When action is needed, it can wake up the MCU with an interrupt, thus reducing power consumption of the system. The TIC12400-Q1 also offers integrated fault detection, ESD protection, and diagnostic functions for improved system robustness.

The TIC12400-Q1 supports two operational modes: continuous and polling. In continuous mode, wetting current is supplied continuously. In polling mode, the TIC12400-Q1 turns on the wetting current to sample the input periodically, under control of a programmable timer. Polling mode significantly reduces system power consumption.
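The power saving from polling follows directly from the duty cycle. The numbers below are illustrative, not TIC12400-Q1 specifications:

```python
def avg_wetting_current(i_wet_ma, t_on_ms, period_ms):
    """Average wetting current when the source is duty-cycled by the poll timer."""
    return i_wet_ma * t_on_ms / period_ms

# Hypothetical example: 10-mA wetting current, 100-us on-time, 10-ms poll period
i_avg = avg_wetting_current(10.0, 0.1, 10.0)   # 0.1 mA average, a 100x reduction
```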

For automotive applications without resistive multi-position switches, Texas Instruments also offers the TIC10024-Q1. This device has an identical feature set to the TIC12400-Q1, but without the ADC.

5. The MSDI-based solution consumes approximately 66% less board space than a discrete design. (Source: TI blog: “Body control modules—invisible but fundamental for every car”)

Figure 5 shows how the MSDI solution can save space versus a comparable discrete design. The underlying board is a BCM with a discrete switch interface. On it, we’ve superimposed, at a 1:1 scale, a snippet from the Automotive MSDI reference design—an example implementation using the TIC12400-Q1 device and required external circuitry. Both designs provide wetting current, reverse-blocking diodes, and ESD capacitors. With a two-layer board, the MSDI reference design measures 17.5 by 18.8 mm, one-third the size of the discrete solution.

TIC12400 Application Example: Resistive Switch

Using an MSDI certainly simplifies the hardware design and layout of a BCM switch interface. However, designing for the automotive environment requires the designer to pay close attention to a number of variables and tolerance stackups to produce a robust design.

6. This application model is used to calculate allowable values for a three-position resistive switch. (Source: “TIC12400-Q1 24-Input Multiple Switch Detection Interface (MSDI)” PDF)

Let’s look at an example (Fig. 6) of the TIC12400-Q1 decoding the output of an analog resistor-coded switch with three states. When the switch changes state, the TIC12400-Q1 must correctly detect the state transition, store the information, and alert the MCU via an interrupt so that it can retrieve the data from the TIC12400-Q1’s status registers. The three states are:

State 1: Both switches open

State 2: SW1 open and SW2 closed

State 3: SW1 closed and SW2 open

Fig. 6 shows the switch specification. RDIRT represents the leakage across the switch in the open state; RSW1 and RSW2 are the two switch resistances; and R1 is the discrete resistor added in when switch 2 closes. The table shows the minimum and maximum values of these resistances. The TIC12400 sees an equivalent resistance RSW_EQU at the selected input pin INX.

Design example specifications. (Source: “TIC12400-Q1 24-Input Multiple Switch Detection Interface (MSDI)” PDF, p. 118)

To represent the automotive environment, we make the following assumptions: the battery voltage VBATT can vary between 9 and 16 V, and there is a potential ground shift of up to ±1 V between the switch reference point and the ground reference of the TIC12400-Q1. The table shows the design specifications.

The design begins by calculating the equivalent resistance values at different switch states. After taking into account RDIRT and the 8% resistance tolerance of R1, each state yields a minimum, nominal, and maximum value for RSW_EQU.

When a switch closes, the current source in the TIC12400-Q1 sources a wetting current that flows through the switch. This current is nominally the set value IWETT, but is subject to variations over temperature and process, resulting in an actual current IWETT_ACT.

The nominal voltage at the INX pin is therefore:

VINX = RSW_EQU × IWETT_ACT

The ±1-V ground shift adds further uncertainty: The final measured value of VINX can be within a range of allowable values that all represent a valid switch configuration.

The three-position switch will have three sets of voltage bands (Fig. 7). These translate into three sets of valid ADC digital output codes, separated by code ranges that don’t correspond to a valid switch output. The designer can pick a code in the middle of each “invalid” range as the decision point for each switch position.
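A sketch of the band calculation for the 10-bit, 0- to 6-V ADC. The 5-mA nominal wetting current and ±1-V ground shift follow the example above, but the equivalent-resistance ranges and the ±10% wetting-current tolerance are hypothetical placeholders, not datasheet values:

```python
ADC_BITS, ADC_FS = 10, 6.0   # 10-bit ADC, 0- to 6-V full scale

def adc_code(v):
    """Clamp to the ADC input range and convert a voltage to an output code."""
    v = min(max(v, 0.0), ADC_FS)
    return round(v / ADC_FS * (2**ADC_BITS - 1))

def vinx_band(r_min, r_max, i_wet=5e-3, i_tol=0.10, gnd_shift=1.0):
    """Min/max ADC codes for one switch state: VINX = RSW_EQU x IWETT_ACT,
    widened by the current tolerance and the ground shift."""
    v_lo = r_min * i_wet * (1 - i_tol) - gnd_shift
    v_hi = r_max * i_wet * (1 + i_tol) + gnd_shift
    return adc_code(v_lo), adc_code(v_hi)

# Hypothetical RSW_EQU (min, max) in ohms for the three switch states
bands = [vinx_band(*r) for r in ((90, 110), (450, 550), (950, 1150))]
```

The designer then verifies the three code bands don't overlap and places each decision threshold midway through the gap between adjacent bands.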

7. Each of the switch positions will result in a range of valid ADC codes separated by “no-go” bands. This example uses a nominal wetting current of 5 mA. (Source: “TIC12400-Q1 24-Input Multiple Switch Detection Interface (MSDI)” PDF, p. 120)

This example is covered in detail in the TIC12400 datasheet. There’s also a reference design, mentioned earlier—the MSDI reference design contains example implementations of a variety of high-voltage (HV) switch inputs using an MSDI device. The switch inputs satisfy a typical automotive requirement. They can withstand transients up to 40 V and reverse battery conditions down to –16 V.

The user guide gives examples of a TIC12400-Q1 or TIC10024-Q1 MSDI handling HV switch inputs for BCM, faceplate, and top-column-module (TCM) applications. In addition, the design utilizes a wide-VIN low-dropout (LDO) TPS7B6733-Q1 automotive regulator to create a fixed 3.3-V supply suitable for powering the BCM MCU.

Conclusion

As the number of comfort and convenience features in automobiles continues to increase, the BCM must interface with large numbers of switches in several different configurations, while simultaneously minimizing current consumption and keeping the board small. A multi-switch detection interface (MSDI) device helps designers meet these requirements by bundling many standard interface requirements in a compact package.

