Linux Driver Standards (was BUSN: Blackout of 2003)

Thread Starter

Greg Goodman

> As someone who's writing Linux applications right now for a Linux HMI,
> I'll tell you one major reason why no one is using Linux -- there are
> basically _no_ drivers for any industrial automation hardware. In the
> Windows world, there are numerous companies that sell OPC and COM and
> .NET and other acronym-compliant drivers to talk serial, Ethernet, or
> whatever to PLCs and the like.


What application interface standard would you like drivers on Linux to be written to? What standard interface for I/O modules does your Linux-based application support?

Disparate implementations come first; standards follow, in response to a growing demand for interoperability.

Linux is in a position, with respect to device drivers, comparable to the one Windows was in 10 years ago. There was no standard interface for device drivers, so every automation package provided its own. Whatever SCADA/HMI package you used, you had to have drivers for your equipment that were specific to that package. Remember the days when WonderWare could only talk to devices that somebody had written a WonderWare driver for?

That didn't really change until OPC came along. (DDE was there first, but wasn't really an answer to the "generic I/O driver interface" problem.) With OPC, drivers could be developed independently of the SCADA/HMI package because the driver didn't have to be built with an SDK specific to the SCADA/HMI package that would use it.

For now, Linux/Unix automation packages have lots of drivers; they're just not interoperable. RTAP, AutomationX, Modcomp Scadabase, AccessPoint, etc. support dozens of industrial protocols and I/O cards. (They don't have drivers for absolutely everything, but they do support the protocols and bus cards their clients needed badly enough to be worth supporting. That'll come later, when an interface standard makes supporting absolutely everything a simple function of supporting a single interface.)

When there are enough SCADA/HMI applications deployed on Linux that there's value in writing inter-operable drivers, then it'll happen. And, assuming that it happens, it'll happen more quickly on Linux than it did on Windows, because some significant part of the existing body of protocol implementations will be open source and easy to repackage into modules compliant with whatever standard interface evolves.

To some extent, the push for driver module interface standards is already underway. COMEDI applies to a particular class of I/O board, though not really to the various fieldbuses. Other standards will come.

My two cents,

Greg Goodman Chiron Consulting
 
Thomas Hergenhahn

I think that OPC is only a second-best choice. It involves a lot of overhead just to exchange a few simple bytes.
IMHO, the driver for any PLC or other device could basically be reduced to two functions:

1. Read a block of bytes from the device.
2. Write a block of data to the device.

Some devices would need to open a logical connection first, so two helper functions to open and close such a connection are also needed. Open may return an opaque handle that has no meaning to the application.

I know that there are many different ways in which manufacturers organize the data inside their devices, but most of this could be fitted into a common structure like:

deviceAddress,
subDevAddr,
dataArea,
areaNumber,
offsetInArea

Any SCADA application could be adapted to call such functions from a shared library, and such a library could be written for any PLC-like device.
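As a sketch, the whole proposed interface might look like this in C. Every name here is invented for illustration, and the in-memory "device" only exists so the sketch can run; a real library would put protocol code for Modbus, MPI, DF1, etc. behind the same four calls:

```c
#include <stddef.h>
#include <string.h>

/* the common addressing structure proposed above */
typedef struct {
    int deviceAddress;   /* station/node on the bus */
    int subDevAddr;      /* rack, slot, or similar sub-address */
    int dataArea;        /* inputs, outputs, registers, data blocks, ... */
    int areaNumber;      /* which instance of that area, e.g. DB number */
    int offsetInArea;    /* byte offset within that area */
} plc_address;

typedef int plc_handle;  /* opaque; no meaning to the application */

/* stand-in for a real device's memory, so the sketch is runnable */
static unsigned char fake_device[256];

/* helper functions to open and close a logical connection */
plc_handle plc_open(const char *conn) { (void)conn; return 1; }
void plc_close(plc_handle h) { (void)h; }

/* the two core functions: move raw bytes in and out of the device */
int plc_read(plc_handle h, const plc_address *a, void *buf, size_t len) {
    (void)h;
    if ((size_t)a->offsetInArea + len > sizeof fake_device) return -1;
    memcpy(buf, fake_device + a->offsetInArea, len);
    return 0;
}

int plc_write(plc_handle h, const plc_address *a, const void *buf, size_t len) {
    (void)h;
    if ((size_t)a->offsetInArea + len > sizeof fake_device) return -1;
    memcpy(fake_device + a->offsetInArea, buf, len);
    return 0;
}
```

The application never sees anything but the handle, the address structure, and the two read/write calls.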

Another standard will be necessary to tell the application how to calculate byte addresses from addresses in the device-specific notation, what abbreviations to use, what endianness the device uses, and whether byte addresses not on PLC address boundaries are possible.
This can be done in a text file shipped with the driver library.
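Such a text file might look like this; the format and every keyword are invented here purely for illustration:

```
# hypothetical device description shipped with the driver library
device         = ExamplePLC-300
endianness     = big
# abbreviation used in device-specific notation -> dataArea code
area DB        = 132        # data blocks
area M         = 131        # flag memory
# how to compute byte addresses from the device notation
bytesPerWord   = 2
oddByteAddressesAllowed = yes
```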

Just my two cents.
 
Lynn at Alist

Yes, I think this should be top priority - first create a very LOW-level API that allows simple reads/writes of bits and words. Something that could apply to Modbus, DF1, GE/SNP, PPI and others.

All of the "open source" I've seen so far tends to mush too much of the application function (and structured data) into the "driver", so that the driver cannot easily be reused - especially if a user needs to combine drivers and applications from several different projects.

- LynnL
 
Thomas Hergenhahn

Hello Lynn,
Thank you for sharing my opinion. I was prepared to be flamed for it.
For (Siemens) PPI/MPI, you may be interested in libnodave (libnodave.sourceforge.net), a free library that implements this.
For AB, there is ABEL for Ethernet (I could not test it; I don't have this equipment).
I have drivers for GE and AB DF1 in my project visual (HMI/SCADA).
Following your line of thought, I will provide the pure communication in separate libraries when I find time to do the work.
Thomas

 
Alex Pavloff

Well, that sounds good for PLCs, but....

Ever try to talk to a motion controller before? Galil? Compumotor? Delta Tau? Indramat?

Numeric addresses? Offsets? Nope, you've got variables and a command language designed to be used from Hyperterminal. CRC? We don't need any CRC or packet or anything, just type MOTORSPEED=? and there you go!

While many PLCs do in fact follow something very close to the system you describe, the moment you get out of PLC land into the world of other devices, things become a lot more fuzzy, and about the only thing you can do is have extremely vague standards.

And you're right -- OPC is overkill to read a word from a Modbus device. However, it works, and all the user has to do is buy a faster computer, which, when you figure out the total cost of the machine, is a very small piece of the total.

Alex Pavloff - [email protected]
ESA Technology ---- www.esatechnology.com
------- Linux-based industrial HMI ------
-------- www.esatechnology.com/5k -------
 
Curt Wuollet

Hi Thomas

What I proposed for universal comms is even easier to use than that, and is user-transparent. Jiri (one of our founding MAT project programmers) was even kind enough to do a reference implementation.

A block of variables (registers) is simply mapped between processors and serviced in the background when changed. Accomplishes pretty much what is actually needed for automation. Nothing special need be done to use it and it's about as efficient and universal as can be. Works for IO, IPCs, synchronization, you know, automation stuff. The underlying mechanism could be shared memory on the same host or a very simple and efficient layer on TCP or UDP. Could be done with the bare minimum of resources even on smart IO devices.
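A minimal sketch of the idea above: the names and sizes are my own assumptions, and a plain memory copy stands in for the shared-memory or TCP/UDP transport:

```c
/* a fixed block of registers mirrored between two nodes; a write marks
   the register dirty and a background service pushes changes across */
#define NREGS 64

typedef struct {
    unsigned short reg[NREGS];
    unsigned char  dirty[NREGS];   /* per-register change flag */
} reg_map;

/* the application just writes variables; nothing special need be done */
static void map_write(reg_map *m, int i, unsigned short v) {
    m->reg[i] = v;
    m->dirty[i] = 1;
}

/* "serviced in the background when changed": copy dirty registers to
   the peer; in real life this loop would run over shared memory on the
   same host or a simple layer on TCP or UDP */
static void service(reg_map *local, reg_map *remote) {
    for (int i = 0; i < NREGS; i++) {
        if (local->dirty[i]) {
            remote->reg[i] = local->reg[i];
            local->dirty[i] = 0;
        }
    }
}
```

Everything above the transport is the same whether the peer is another process, an IPC, or a smart I/O device.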

Many manufacturers already support something similar, but of course not between competing products. Even this most basic functionality would do away with an enormous number of kludges, workarounds and glue. And it is far more attractive than the complex, bloated, and politically charged alternatives. No self-respecting manufacturer would be incapable of doing it in a month at most. The interoperability problem has nothing to do with technology.

Regards

cww
 
OPC does have overhead compared to industrial networking protocols, so measured purely by overhead, OPC is a second-best choice. However, there are other aspects that weigh in heavier: connectivity and ease of use. These add to the list of things a server must do.

Connectivity: The OPC solution isolates the HMI software from the unique complexities of all protocols and device types. Every protocol has different bus arbitration mechanisms, redundancy schemes, addressing schemes, timing, etc. Describing all this in a text file is near impossible; a software executable is required. Moreover, it is not sufficient to write a driver for a protocol alone. A network may have devices from several different suppliers, each one with its data organized slightly differently. The driver needs to be loaded with additional information about the specific device types used and the configuration in those devices. For good performance, also add the task of caching to the list. That is why OPC is required.

The server must be able to execute on a different machine than the client. Therefore, clients need the ability to start and stop the server remotely as required. This is another task in the list.

Ease of use: It must be easy for the user to locate data in the server in order to put it on the screen etc., without having to know device address, files, memory register numbers, and bit positions etc. Therefore, any client must be able to browse any server name space to see what is "in there" and simply point and click to the parameter desired. Keep in mind that the server may be running in a different computer. So this is another task in the list of things a server must do.
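The point-and-click browse described above boils down to the client walking a tree of names without ever seeing a device address. A toy sketch, not the actual OPC API, with all names invented:

```c
#include <stddef.h>
#include <string.h>

/* toy name space: each node has a name and optional children */
typedef struct node {
    const char *name;
    const struct node *children;
    int nchildren;
} node;

/* browse one level: return the child with the given name, or NULL;
   the client repeats this to drill down to the parameter it wants,
   never needing register numbers or bit positions */
static const node *browse(const node *parent, const char *name) {
    for (int i = 0; i < parent->nchildren; i++)
        if (strcmp(parent->children[i].name, name) == 0)
            return &parent->children[i];
    return NULL;
}
```

In a real server the tree would be served over the network, so the client can browse a server on a different computer.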

Once you start looking at the different aspects of it, I don't think it could be made much simpler than OPC.

I'm not a programmer, but what I think the Linux community should do is develop proxies and stubs that can be used to connect to Windows OPC servers and clients. This way a Linux HMI has a chance of getting accepted, since it can be used with the very wide range of OPC servers available. There is already software for some non-Windows operating systems that does this, so obviously it must be possible.

Jonas Berge SMAR
==================
[email protected]
www.smar.com
 
Ralph Mackiewicz

On October 15, 2003, Thomas Hergenhahn wrote:
> I think that OPC is only a second best choice. It involves a lot of
> overhead in order to exchange some simple bytes. IMHO, the driver for
> any PLC or other device could basically be reduced to two functions:
>
> 1. Read a block of bytes from the device.
> 2. Write a block of data to the device. <

I think that most applications would rather deal with data than bytes. If we are talking about a device driver interface for an O/S kernel, you are probably correct. If we are talking about an interface for applications, then dealing with bytes creates dependencies in the application on the specific byte ordering and storage layout of the device, which complicates applications considerably and makes it more difficult to separate the application from the device's data representation. While OPC is probably a poor kernel device driver interface, it is a much better interface for applications. There are non-Windows versions of OPC interfaces available from the Object Management Group and in IEC 61970-4 that could be used on Linux.

...snip...snip...

> I know that there are many different ways how manufacturers organize
> the data inside their devices, but this could mostly be fit into a
> common structure like:

...snip...snip...

> Another standard will be necessary to tell the application how to
> calculate byte addresses from addresses in the device specific
> notation, what abbreviations to use, what endianness the device uses
> and whether byte addresses not on PLC address boundaries are possible.
> This can be done in a text file shipped with the driver lib. <

A better approach for an "application" interface would be one that eliminates the need for an application to understand the organization and storage of data in devices. Instead, enable the application to discover the logical structure of the data within the context of a data model that describes the data in the terms in which the application uses it. This approach is called a model-driven architecture; see http://www.omg.org/mda for a summary. OPC, and the related OMG and IEC versions of this interface, are compatible with this model-driven approach.

Regards,
Ralph Mackiewicz
SISCO, Inc.
 
Michael Griffin

I looked at the Sourceforge project and found an interesting link to IBHSoftec. Apparently this company has something called "IBH Link" (the link was present because libnodave supports this hardware).

http://www.ibhsoftec-sps.de/

(Enable javascript to use this web site).

The following are some quotes describing it.

"If you want to connect your PC via Ethernet just take the IBH Link. The IBH Link is a very small gateway integrated in a Sub D connector."

"With IBH Link online functions are possible via Profibus DP with up to 12 Mbit/s or via PPI/MPI. The IBH Link will reduce your costs because there is no need for the CPs from Siemens nor the software Simatic Net is required."

The picture was of a Profibus D-shell connector with an ethernet cable coming out of the end of it. That is, this seems to actually be a DP/MPI/PPI to ethernet gateway in a connector. Supposedly it's also cheaper than typical CP boards.

Is anyone familiar with this product? Would anyone care to comment on it? Helmholtz is also selling what appears to be the same product with their name on it (they call it "Net Link"). Neither company appeared to have any detailed information available on their web sites. They also do not indicate whether the PC can be a master or just a slave.

I can however imagine quite a few uses for this item, if it really does work as advertised. Linux support is provided through the libnodave library.

--

************************
Michael Griffin
London, Ont. Canada
************************
 
Michael Griffin

As you point out elsewhere in your message, this sort of shared I/O or shared memory is used by some manufacturers already. However, this doesn't really answer the question of writing a *standard* for creating drivers and interfacing them with actual devices.

How does the driver exchange data with the application? What is the mechanism used? You've told us how it looks logically to the (PLC) program, and how it goes out on the wires, but what about the bit in between?

Michael Griffin
 
Hi Jonas

I wouldn't have a problem if that ease of use were not coincidental with a profound and deliberate refusal to support or even permit the use of anything else on MS platforms. That's kinda like it's easy to use the local power company or telco. Try using anything else :^) We need something like cellular to circumvent the enforced monopoly and stir up some competition. So you could "call" SCADA and enterprise systems without Microsoft tariffs.

Regards

cww
 
Hi Ralph

I would buy that argument if it were ever (to my knowledge) put into practice. Since, in the current market, it is precisely known what is running on the endpoints, and it must be so, I fail to see what good the extra baggage is, or that the abstraction serves any purpose. When is the last time you ever bought an OPC driver that would run on any other platform, with a need to actually qualify data? Everything I've seen makes a precise assumption about what you're running. And you don't have any choice. With these assumptions it solves a non-problem; the manual can cover a singleton pairing with no particular difficulty. The theory is good, but the reality renders it moot.

Regards

cww
 
Thomas Hergenhahn

On October 21, 2003, Michael Griffin wrote:
> I looked at the Sourceforge project and found
>an interesting link to IBHSoftec. Apparently
>this company has something called "IBH Link"
>(the link was present because libnodave
>supports this hardware).
>
> http://www.ibhsoftec-sps.de/
>
> (Enable javascript to use this web site).
>
> The following are some quotes describing it.
>
> "If you want to connect your PC via Ethernet
>just take the IBH Link. The IBH Link is a very
>small gateway integrated in a Sub D connector." <

It is a gateway between ethernet and:
1. Profibus (which I never tested)
2. MPI, a proprietary Siemens protocol, supposedly an extension of Profibus and carried over it (this is what I tested and what libnodave supports)
3. The PPI interface of the Siemens S7-200 family (which I never tried)

> "With IBH Link online functions are possible
>via Profibus DP with up to 12 Mbit/s or via
>PPI/MPI. The IBH Link will reduce your costs
>because there is no need for the CPs from
>Siemens nor the software Simatic Net is
>required." <

CP is "communication processor" in Siemens terminology.

> The picture was of a Profibus D-shell
>connector with an ethernet cable coming out of
>the end if it. That is, this seems to actually
>be a DP/MPI/PPI to ethernet gateway in a
>connector. Supposedly it's cheap(er) as well
>than typical CP boards. <

But you still have to have an application that can understand, deal with, and itself form the contents of the packets. IBH ships Windows DLLs and programs with it which provide:

1. An interface to the Siemens Step7 programming software
2. DDE and OPC servers (if I got it right)
3. A library in C, Pascal/Delphi and VB flavours, which is in turn a client to the DDE or OPC server, for accessing the PLC from your own programs

> Is anyone familiar with this product?
>Would
>anyone care to comment on it? <

Yes, I can comment, as I use it at work in a plant of crucial importance. I had to replace control electronics from the early seventies with an S7-315 and distributed I/O from third parties.

I was thinking about using a CP 343(-IT) to connect a computer to it. This should:

1. Provide a small HMI application for making some settings.
2. Provide an overview HMI, to eliminate the need for people to walk around the plant grounds writing down process data.
3. Collect some data crucial to plant operation, at a rate comparable to a paper recorder.

Point 3 was where I needed the maximum speed to read signals out of the PLC. Nobody at Siemens could tell me how far I would get with the CP. Then I became aware that it is connected to the serial backplane bus of the S7-300 at 187.5 kbaud, and shares that bus with the local I/O. Then I read about the IBH Link and ordered one. I put it on the MPI connector, also 187.5 kbaud, but not shared with I/O. I did not try it on Profibus at up to 12 Mbaud, because I feared (I cannot tell whether this is justified) that it could disturb, or slow down in an unpredictable way, the distributed I/O communication. I read a block of 140 bytes 5 times per second (with libnodave code under Linux). That doesn't sound like much, but it is 4 times what I could get with an MPI adapter on a serial interface. The MPI protocol involves some overhead, and the CPU is set up (factory setting) to dedicate no more than 20% of its time to communication. I should like to know whether a CP does better.

>Helmholtz is also selling what appears to be
>the same product with their name on it (they
>call it "Net Link"). Neither company appeared
>to have any detailed information available
>their web sites. They also do not indicate
>whether the PC can be a master or just a slave. <

It can only be a master on MPI, and I suppose the same is true for Profibus (?)

Thomas Hergenhahn
 
Thomas Hergenhahn

Hello Curt,

Either you misunderstood me or I am missing the point. This thread began as a discussion of how to talk to existing equipment. And what you subsume under the term "serviced" has to be carried out by some code. Once this code exists, you may hide it from the application level so that it appears to do its service in the background.

And shared memory is only an option for two applications on the same machine, both designed to use it.
 
Hi Michael

Just to prove this doesn't have to be complicated, let's establish, say, an 8K frame to put this stuff in. That's smallish by today's standards, but one could use more than one with a little extra magic. Let's further establish an order of data types. All could support a byte, a word (register), a float and a blob. Without digging up my references, a TCP datagram has a payload of something like 240 bytes. We want to keep it simple, avoid fragmentation and not necessarily be bound to TCP, so let's say we send 128 bytes of data with a few bytes for data type, type count, byte count, frame count, LRC, etc. We wouldn't need all that for TCP, but would for UDP or whatever. By convention, we send in order from byte to blob.

So my machine establishes a connection and sends a null layout. On the receiving end we look for byte datagrams first; if we don't receive any, we move on up the order. If we receive one, we know we are going to have at least 128 byte types and set a pointer or however that machine keeps track. If we receive a second, we are going to have 256, etc. When the next type appears or doesn't, we set its space aside, and so on until we have the layout established. Both machines now have a map and know what the data is. This would be self-discovering in the modern fashion. The user would then assign tags to the numbers and use them as normal variables. A write would mark the 128-byte block as dirty and set a flag for a transmit.

A blob is for strings, database records, or other things of variable length. Its number would remain the same for as many blocks as are assigned to that blob. They would be understood by convention as a C string, binary data, etc., and delimited.

Arbitration in this simple model could be as simple as the frame being read-only for the non-establishing machine, requiring it to map a frame on the originator, or as complex as passing a token back and forth for writes. I myself would probably just send a format datagram for the layout, but folks like plug & play these days.

I could harden this up and document it as an open RFC with a day or so of thinking. Any serious omissions or special needs could be added, and the problem is solved. I could probably demo it between two of my Linux PLCs after a weekend. The block size would be a little large for serial protocols, but this should work well for Ethernet and other fast networks that can handle the datagram length. It's all payload, so it could be tunneled, routed, folded, spindled, and mutilated across most anything like a network. But layered on top of standard sockets using standard Internet protocols would be most useful. Establishing a socket-to-socket connection is already pretty much standardized.
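To make the discovery step above concrete, here is a rough sketch of the receiving side. The 128-byte block size and the header fields follow the description above, but everything else is my own guess at one possible encoding:

```c
/* datagrams carry 128-byte blocks of one data type; types are sent in
   order from byte to blob, so counting datagrams of each type gives
   both machines the same map without any explicit format exchange */
enum { T_BYTE, T_WORD, T_FLOAT, T_BLOB, NTYPES };

typedef struct {
    unsigned char  type;          /* data type of this block */
    unsigned char  frame;         /* frame count */
    unsigned short bytes;         /* bytes actually used in the payload */
    unsigned char  payload[128];
} datagram;

typedef struct { int blocks[NTYPES]; } layout;

/* receiving side: each datagram of a type reserves one more 128-byte
   block of that type in the shared frame */
static void note_datagram(layout *l, const datagram *d) {
    if (d->type < NTYPES)
        l->blocks[d->type]++;
}

/* once the layout is established, each type starts at a fixed offset,
   and the user can assign tags to positions within it */
static int type_offset(const layout *l, int type) {
    int off = 0;
    for (int t = 0; t < type; t++)
        off += l->blocks[t] * 128;
    return off;
}
```

Two byte-type datagrams followed by one word-type datagram would give both ends 256 bytes of byte data starting at offset 0 and 128 bytes of word data starting at offset 256.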

And yes, there are probably gotchas and infelicities, but that's more progress in 10 minutes than has been made in years by consortia and working groups. I am not a super programmer, but I'll bet even I could write this for most any target that runs C. And as I mentioned, even this would solve lots of problems and enable lots of functionality. Oh, and eliminate the majority of workarounds and kludges. Even 8K of universal shared data would serve most needs. And it would scale easily.

Regards

cww
 
Hi Thomas

What I propose is a way to make it general and universal. True, it won't work on existing equipment, it's sort of a way to fix that, should the vendors ever really want to fix the status quo. My point is that it's trivial to fix a vast number of problems that people have to deal with, IF the desire to fix them is there.

Regards

cww
 
Ralph Mackiewicz

On October 22, 2003, Curt Wuollet wrote:
> I would buy that argument if it were ever (to my knowledge) put into
> practice. Since, in the current market, it is precisely known what is
> running on the endpoints, and it must be so, I fail to see what good
> the extra baggage is or that the abstraction serves any purpose. <

If the systems you work on are very simple, then providing access to data in the context of a model might be overkill. Let me give you a real-world example: you want to calculate transformer ratings based on current temperatures and current loads. You have 1,000 transformers in your system. The data you need is stored in a SCADA database with 300,000 real-time floating point values. How do you find the transformer loads? Using the typical "serves its purpose" approach you suggest, it's simple: you program all the tag names corresponding to the transformer loads into your application, or you build a big table that contains all the tag names. Either way, what do you do when you add, change, or delete a transformer? By the way, this happens several times a month. With a model-driven approach, you build an application that can find all the transformers by searching the model. If you change the model by adding or deleting transformers, the application still works without ANY change. If you replace your SCADA database with a different system, the application still doesn't change, because you use the model to find the data. There certainly is an overhead in using a model, but it pays for itself many times over.
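The transformer example can be reduced to a toy in a few lines. The "model" here is just an array and all the names are invented, but it shows why the application survives additions to the model unchanged:

```c
#include <string.h>

/* each point carries its class in the model, so the application can
   search by class instead of hard-coding tag names */
typedef struct {
    const char *tag;    /* e.g. "XFMR_001.LOAD" (invented tag) */
    const char *cls;    /* e.g. "TransformerLoad" (invented class) */
    double value;
} point;

/* find every point of a given class; adding or deleting transformers
   in the model requires no change to this code or to its callers */
static int find_class(const point *model, int n, const char *cls,
                      const point **out, int max) {
    int found = 0;
    for (int i = 0; i < n && found < max; i++)
        if (strcmp(model[i].cls, cls) == 0)
            out[found++] = &model[i];
    return found;
}
```

A real model would carry relationships and types rather than a flat class string, but the application's independence from tag lists is the same.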

While the scale might be different, these same kinds of problems are no different in the automation world. Look at a plant with a few hundred machines. You need to build an application that needs detailed cycle times from all the machines to precisely predict plant output in the future. Every time a machine runs, the parts are different and the operations are different; the kinds of parts, operations, and tooling change on an hourly basis. There are thousands of different combinations of machines, processes, tools, time, etc. You can solve this problem with the low-overhead approach, accessing data using the number of seconds in a given operation stored in R4001 in this machine, the number of milliseconds stored in R4023 in that machine, and so on. There are many elaborate schemes available for handling this complexity using front-end processors, database translators, data transformations, MES, MRP, etc. In some cases the cost of solving the problem is so great that the user decides it isn't worth it. If the cost were much lower, the manufacturer would be able to accomplish a lot more.

There is an overhead, but there is real value in that overhead that extends way beyond what it takes to energize a hydraulic valve to cause a motion. To see the value you have to look beyond that narrow, immediate view of motion control. One of the reasons that automation systems are difficult (and costly) to integrate on a large scale is the lack of a model to describe them accurately, and the use of control systems that are either too primitive to understand such a model or so narrowly tailored that they are a barrier to integration instead of a tool.

> When is the last time you ever bought an OPC driver that would run
> on any other platform with a need to actually qualify data?
> Everything I've seen makes a precise assumption of what you're
> running. And you don't have any choice. <

If by "what you're running" you mean O/S, then, yes, OPC is currently Windows specific. As I pointed out, there are non-Windows specific definitions of OPC interfaces from both the OMG and IEC. And, with OPC XML you can achieve interoperability of OPC data sources with any platform including Linux.

Nearly all the OPC servers I have seen, and the OPC clients that talk to them, are self describing. The client (an HMI for instance) doesn't have to be preprogrammed to understand how a particular device represents or addresses data. A point for a screen is selected from a menu (the item browser). The HMI application doesn't care how bytes are arranged inside the controller. This abstraction is EXTREMELY useful. Without it, HMIs would cost a lot more than they do today. While OPC doesn't specify a specific model, OPC is compatible with a model-driven approach and is an excellent API for applications with only a modest amount of overhead.

> It solves a non-problem with these assumptions, the manual can
> cover a singleton pairing with no particular problem. The theory is
> good, but the reality renders it moot. <

If the only problem you are trying to solve is causing a motion, then anything else is just overhead. If the only problem you ever try to solve is how to cause a motion, you are only putting the overhead somewhere else where the overall cost could be much greater.

Regards,
Ralph Mackiewicz
SISCO, Inc.
 
Michael Griffin

The following is on the IBH Link profibus/MPI/PPI to ethernet gateway.

On October 22, 2003, Thomas Hergenhahn wrote:
<clip>
> Then I read about the IBH Link and ordered one. I put it on the MPI
> connector, also 187.5 kbaud, but not shared with I/O. I did not try it
> on Profibus at up to 12 Mbaud, because I feared (I cannot tell whether
> this is justified) that it could disturb, or slow down in an
> unpredictable way, the distributed I/O communication. I read a block of
> 140 bytes 5 times per second (with libnodave code under Linux).
> That doesn't sound like much, but it is 4 times what I could get with
> an MPI adapter on a serial interface. The MPI protocol involves some
> overhead, and the CPU is set up (factory setting) to dedicate no more
> than 20% of its time to communication. I should like to know whether a
> CP does better.
<clip>

I can understand some concern about whether the backplane could be a bottleneck. If the IBH Link could be a DP slave, then the S7-315-2 DP has a DP master integrated into the CPU. I can think of a couple of applications that would interest me where this device would be useful, and in both cases it would seem simpler for the PC to be a slave.

I have one project in particular which I am doing research on now, that involves updating some PC based test equipment. I was thinking of adding an option for a profibus interface (the PLC would command the PC to conduct a test) and this seems like a nice low cost solution.

Another application I have given a lot of thought to would be where a PC monitors machinery for equipment performance statistics (similar to what you have described). In the scenario I have been considering, the PC would poll the PLC (or vice versa) quickly, but exchange a minimal amount of data. If the PC can communicate very quickly with the PLC, then only basic logic states would need to be exchanged (mode, cycle status, etc.) along with a few words of other data (alarm words, etc.).

If the polling can take place at least 10 times per second (20 times would be better), then most of the performance analysis logic can be moved from the PLC to the PC. This is significant, because my research seems to indicate that minimising the enabling PLC programming is the key to minimising the overall project costs. As long as there are no per unit software licensing costs (Linux/Apache/PHP/Python) and the PC programming is spread over a number of identical units, then the PC doesn't dominate the overall costs.

The ethernet gateway interface is also attractive because it allows the use of any of a number of small PCs which are now available. A PCI card always seems to add a lot of bulk to any of the package formats I have looked at. If the gateways can be DP slaves, then one PC could monitor several machines without connecting the different DP networks together.

> It can only be a master on MPI, and I suppose the same is true for
> Profibus (?)
<clip>

I didn't see any detailed information on IBH's web site. Is the technical information you got reasonably detailed, and what did you find to be the best source for it? I'm not sure if IBH is the actual manufacturer (since Helmholtz seems to be selling the same thing), or if they are who I should be contacting about this.

This was a bit of a long letter, but this subject interests me greatly.

--

************************
Michael Griffin
London, Ont. Canada
************************
 