Volts or Amps – the crucial question.

National Instruments C Series DAQ cards, capable of recording voltage signals, current signals, or both.

Volts or Amps

In a past Transducer Thursday we looked at galvanometers, and in an upcoming article we’ll look at digital voltmeters. Both have been used to create readouts from sensor data. That leads to a question: what do we want transducers to output? A signal that varies in current, or one that varies in voltage?

In modern instrumentation, unless a transducer creates a voltage directly from the process being measured (as thermocouples do), it is usually fitted with built-in amplifiers to produce a signal that varies within a standardised range. Typically, this is either 4-20 milliamps or 0-10 volts. But why? How did we get here? And why do we care?

In terms of measuring the signal itself, it is essentially irrelevant whether we use current or voltage. Both are practical and easy to measure, whether with an analogue galvanometer or an analogue-to-digital converter. With a calibrated resistor, you can measure current using a voltmeter and voltage using a galvanometer. Any ADC today that measures current uses this technique, and equally any galvanometer-based voltmeter used the reverse. Precision is much the same at the point of measurement. It therefore comes down to factors other than the accuracy or ease of measuring volts or amps.
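As a minimal sketch of that reciprocal trick in code (the resistor value and the readings below are illustrative assumptions, not figures from any particular instrument):

```python
# Ohm's law lets one quantity stand in for the other through a
# calibrated resistor of known value R: I = V / R and V = I * R.

R_CAL = 100.0  # ohms, an assumed calibrated resistor value

def current_from_voltage(v: float, r: float = R_CAL) -> float:
    """Infer current (A) from the voltage dropped across the resistor."""
    return v / r

def voltage_from_current(i: float, r: float = R_CAL) -> float:
    """Infer voltage (V) from the current driven through the resistor."""
    return i * r

print(current_from_voltage(0.42))    # 0.0042 A, read with a voltmeter
print(voltage_from_current(0.0042))  # 0.42 V, read via the current it drives
```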

National Instruments 9219 DAQ modules can measure voltage or current signals with ease.

If we can use either voltage or current, why those particular ranges? In both cases the low current or voltage means that risks are low: wires are unlikely to overheat and electric shocks are all but impossible. Both systems can be driven by relatively simple and practical power supplies. Again, this doesn’t give us a firm reason to choose one over the other, but it does explain why these ranges, rather than higher ones, are the two most common.

So what are the advantages of either standard? In the case of current, particularly the standard 4-20 milliamp range, it allows for an easy check that the sensor is behaving as expected. 4 milliamps is mapped to zero (a “live zero”), so if a reading shows 0 mA it is easy to see that an error has occurred: either there is a line break or the sensor has failed to function.
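A minimal sketch of that live-zero check, assuming a fault threshold just below 4 mA (the exact threshold and the 0-250 degC transmitter are assumptions for illustration):

```python
def scale_4_20ma(i_ma: float, lo: float, hi: float) -> float:
    """Map a 4-20 mA loop reading onto the measurand range [lo, hi],
    flagging currents that indicate a line break or dead sensor."""
    if i_ma < 3.8:  # assumed fault threshold, just below the live zero
        raise ValueError(f"loop fault: {i_ma:.2f} mA, line break or failed sensor?")
    return lo + (i_ma - 4.0) / 16.0 * (hi - lo)

# With a hypothetical 0-250 degC temperature transmitter:
print(scale_4_20ma(12.0, 0.0, 250.0))  # mid-range current -> 125.0 degC
# scale_4_20ma(0.0, 0.0, 250.0) would raise: clearly a fault, not a reading
```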

National Instruments 9203 modules, specialist cards for 4-20 mA signals.

On top of this, current-based systems are very resistant to noise. A lot of noise is caused by a voltage induced by a changing EM field, whether from mains electricity or another source. Across very low resistances, such as that of a copper wire, the induced current is minimal even in response to comparatively large induced voltages. The same variation in potential might be significant when dealing with high-precision measurements on a 0-10 V system.

Another advantage is that current is unaffected by distance: 13.25 milliamps generated at the sensor will still be 13.25 milliamps at the point of measurement, whereas over a long wire we will see an appreciable drop in voltage due to the resistance of the wire. We can also superimpose a digital signal atop the current signal, such as HART, which allows sensors to give status updates on how they are functioning; this is much more challenging with voltage systems. The final advantage is that the 4 mA buffer at the bottom often allows sensors to be loop-powered, requiring just two wires as opposed to three- or four-wire systems. That said, multi-wire systems exist for current transducers just as they do for voltage ones.
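To put rough numbers on the voltage-drop effect, here is an illustrative back-of-the-envelope estimate; the cable length, conductor size and receiver load are all assumptions for the sketch:

```python
# Estimate the voltage lost along a long cable carrying a 0-10 V signal.
RHO_CU = 1.68e-8  # ohm*metre, resistivity of copper
LENGTH = 100.0    # metres, one-way cable run (assumed)
AREA = 0.5e-6     # m^2, i.e. a 0.5 mm^2 conductor (assumed)
I_IN = 1e-3       # amps drawn by the receiver (assumed 10 kohm input at 10 V)

r_cable = RHO_CU * (2 * LENGTH) / AREA  # out-and-back resistance, ~6.7 ohm
print(f"voltage lost in the cable: {I_IN * r_cable * 1000:.1f} mV")  # ~6.7 mV
# A 4-20 mA loop is immune to this: the same current flows at every
# point in the loop, however long the cable is.
```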

After listing all these advantages, why would we ever use a 0-10 volt system? There are a couple of practical advantages. You can check that a 0-10 volt system is working without breaking the circuit: ammeters must be wired in series, as a consequence of Kirchhoff’s laws, whereas voltmeters are connected in parallel. So if you’re trying to find that break in the wire that the 4-20 mA system flagged so easily, actually locating it becomes much easier with a 0-10 volt system.

The added bonus for troubleshooting is that most basic voltmeters can measure 0-10 volts with relatively high precision, whereas milliamp precision is rarer, so a basic meter can be used to check the line rather than a more expensive piece of equipment. However, using a 250 ohm resistor as a shunt, a 4-20 mA signal can be transformed into a 1-5 V output (or other ranges, depending on the resistor used), easily readable by off-the-shelf voltmeters, which somewhat negates the advantage.
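The shunt arithmetic is a direct application of V = I * R, as a quick check shows:

```python
# A 250 ohm shunt maps the 4-20 mA loop onto a 1-5 V range.
R_SHUNT = 250.0  # ohms
for i_ma in (4.0, 12.0, 20.0):
    print(f"{i_ma:4.1f} mA -> {i_ma * 1e-3 * R_SHUNT:.1f} V")
# 4 mA -> 1.0 V, 12 mA -> 3.0 V, 20 mA -> 5.0 V
```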

The key advantage is the ubiquity of voltage systems: many devices and pieces of equipment, such as tensile test rigs, will have a 0-10 volt analogue output or input, whereas 20 mA current outputs are comparatively rare. 0-10 volts is widely supported both by modern digital acquisition offerings and by legacy equipment. DAQ equipment is expensive and may not be replaced at regular intervals, meaning you are extremely likely to encounter 0-10 volts, even if as an engineer you would prefer to work with 4-20 mA.

National Instruments 9205 and 9203 cards; the 9205 is specialised for 0-10 volt measurement.

0-10 volts is also nice and straightforward to work with: because the range is a simple decimal, it’s very easy to convert between 0-10 V and the range of the measurand in your head, which is slightly harder to do with 4-20 mA given the offset and 16 mA span. With modern software systems, where calibrations can be applied practically in real time, this advantage is essentially trivial when it comes to delivering the final system. It can, however, remain a tangible advantage when troubleshooting a system.
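To make the mental-arithmetic point concrete, here are the two scalings side by side, assuming a hypothetical 0-100 bar pressure transducer:

```python
def from_0_10v(v: float, lo: float, hi: float) -> float:
    """0-10 V: the fraction of range is simply v / 10."""
    return lo + v / 10.0 * (hi - lo)

def from_4_20ma(i_ma: float, lo: float, hi: float) -> float:
    """4-20 mA: subtract the 4 mA offset, then divide by the 16 mA span."""
    return lo + (i_ma - 4.0) / 16.0 * (hi - lo)

print(from_0_10v(7.3, 0.0, 100.0))     # 73.0 bar, readable at a glance
print(from_4_20ma(15.68, 0.0, 100.0))  # 73.0 bar, needs the offset-and-span step
```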

When designing a whole instrumentation or control system in which some devices are definitely going to use 0-10 V, it may be considered advantageous to standardise on that type rather than having a mix of 0-10 V and milliamp-based sensors. However, judging by our own practical experience, where bridge, thermocouple (low voltage), 0-10 V and milliamp sensors have all been integrated into a single system, the added complexity of multiple types is neither insurmountable nor prohibitive, especially when it is planned for.

It is widely stated that milliamp systems are more expensive than voltage systems. This used to be the case, but a quick check of RS pressure transducers, prox probes, or other sensors will demonstrate that this is largely no longer true. ADCs that are set up for current measurement are also largely the same cost.

An important factor to include in the trade-offs when planning a solution is the human element. Can a voltage system be delivered faster and at a higher quality due to familiarity with that kind of sensor? Despite the ‘on-paper’ advantages of 4-20 mA, there is nothing that can be done with 4-20 mA that a system based on 0-10 V can’t achieve. Extra lines can be run for power and for digital data, and if this means that the system delivered is of high quality due to the experience of the team, rather than low quality due to inexperience with 4-20 mA, then that can make a crucial difference as to which type is chosen.

Today, it often comes down to the tastes and preferences of the engineer. The precision of both types is largely the same, and the advantages of 4-20 mA haven’t caused an immediate and wholesale transition to that type. Whilst it is largely the default in the U.S., in Europe it is less common, although growing in popularity.

Have you got a job involving instrumentation systems? Flintmore has expertise and experience in both 4-20 mA and 0-10 volt systems and nearly anything in between. Get in contact today.

Further Reading

To find out more about 4-20 mA systems and current loops, check out the Wikipedia article here:

https://en.wikipedia.org/wiki/Current_loop

An absolute gem of a resource for instrumentation and control is instrumentationtoolbox.com; on this subject they have a series of articles about the implementation and use cases of 4-20 mA systems:

https://www.instrumentationtoolbox.com/search/label/4%20-%2020mA%20Signals