> As someone who's writing Linux applications right now for a Linux HMI,
> I'll tell you one major reason why no one is using Linux -- there are
> basically _no_ drivers for any industrial automation hardware. In the
> Windows world, there are numerous companies that sell OPC and COM and
> .NET and other acronym-compliant drivers to talk serial, Ethernet, or
> whatever to PLCs and the like.
What application interface standard would you like drivers on Linux to be written to? What standard interface for I/O modules does your Linux-based application support?
Disparate implementations come first; standards follow, in response to a growing demand for interoperability.
Linux is in a position, with respect to device drivers, comparable to the position Windows was in 10 years ago. There was no standard interface for device drivers, so every automation package provided its own. Whatever SCADA/HMI package you used, you had to have drivers for your equipment that were specific to that package. Remember the days when WonderWare could only talk to devices that somebody had written a WonderWare driver for?
That didn't really change until OPC came along. (DDE was there first, but wasn't really an answer to the "generic I/O driver interface" problem.) With OPC, drivers could be developed independently of the SCADA/HMI package because the driver didn't have to be built with an SDK specific to the SCADA/HMI package that would use it.
For now, Linux/Unix automation packages have lots of drivers; they're just not interoperable. RTAP, AutomationX, Modcomp Scadabase, AccessPoint, etc support dozens of industrial protocols and I/O cards. (They don't have drivers for absolutely everything, but they do support the protocols and bus cards their clients needed enough to make worth supporting. That'll come later, when an interface standard makes supporting absolutely everything a simple function of supporting a single interface.)
When there are enough SCADA/HMI applications deployed on Linux that there's value in writing inter-operable drivers, then it'll happen. And, assuming that it happens, it'll happen more quickly on Linux than it did on Windows because some significant part of the existing body of protocol implementations will be open source, easy to repackage in modules compliant with whatever standard interface evolves.
To some extent, the push for driver module interface standards is already underway. COMEDI applies to a particular class of I/O board, though not really to the various fieldbuses. Other standards will come.
My two cents,
Greg Goodman, Chiron Consulting
I think that OPC is only a second-best choice. It involves a lot of overhead in order to exchange some simple bytes.
IMHO, the driver for any PLC or other device could basically be reduced to two functions:
1. Read a block of bytes from the device.
2. Write a block of data to the device.
Some devices would need to open a logical connection, so two helper functions to open and close such a connection are needed as well. Open may return a unique handle that has no meaning for the application.
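A minimal sketch of that interface in C, with a mock in-memory "device" standing in for real transport code (all names here are hypothetical, not from any existing driver library):

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical minimal PLC driver interface: open/close a logical
   connection plus block read/write. The handle is opaque to the
   application. This mock backs the "device" with a local byte array
   so the API shape can be exercised without hardware. */

#define MEM_SIZE 4096
static unsigned char device_memory[MEM_SIZE];   /* stand-in for PLC memory */

typedef int plc_handle_t;

plc_handle_t plc_open(const char *device) { (void)device; return 1; }
void         plc_close(plc_handle_t h)    { (void)h; }

int plc_read(plc_handle_t h, unsigned addr, void *buf, size_t n)
{
    (void)h;
    if (addr + n > MEM_SIZE) return -1;         /* out of range */
    memcpy(buf, device_memory + addr, n);
    return (int)n;
}

int plc_write(plc_handle_t h, unsigned addr, const void *buf, size_t n)
{
    (void)h;
    if (addr + n > MEM_SIZE) return -1;
    memcpy(device_memory + addr, buf, n);
    return (int)n;
}
```

A real implementation would replace the memcpy calls with protocol framing over a serial port or socket, but the four entry points stay the same.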
I know that there are many different ways how manufacturers organize the data inside their devices, but this could mostly be fit into a common structure like:
Any SCADA application could be adapted to use such library functions from a shared library and for any PLC-like device, such a library could be written.
Another standard will be necessary to tell the application how to calculate byte addresses from addresses in the device-specific notation, what abbreviations to use, what endianness the device uses and whether byte addresses not on PLC address boundaries are possible.
This can be done in a text file shipped with the driver lib.
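Such a text file might look something like this (the format and keys here are invented for illustration; no real driver lib uses them):

```
# hypothetical device description shipped with a driver lib
device          = Foo-500 PLC
notation        = MW, DB, I, Q    # address abbreviations the device uses
endianness      = big             # byte order on the wire
word_size       = 16              # bits per register
byte_aligned    = no              # byte addresses off register boundaries not possible
# translation rule: MWn maps to byte address 2*n
```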
Just my two cents.
Well, that sounds good for PLCs, but....
Ever try to talk to a motion controller before? Galil? Compumotor? Delta Tau? Indramat?
Numeric addresses? Offsets? Nope, you've got variables and a command language designed to be used from Hyperterminal. CRC? We don't need any CRC or packet or anything, just type MOTORSPEED=? and there you go!
While many PLCs do in fact follow something very close to the system you describe, the moment you get out of PLC land into the world of other devices, things become a lot more fuzzy, and about the only thing you can do is have extremely vague standards.
And you're right -- OPC is overkill to read a word from a Modbus device. However, it works, and all the user has to do is buy a faster computer, which, when you figure out the total cost of the machine, is a very small piece of the total.
Alex Pavloff - firstname.lastname@example.org
ESA Technology ---- www.esatechnology.com
------- Linux-based industrial HMI ------
-------- www.esatechnology.com/5k -------
Please... if we can create an easy solution that meets 70% of the application space (i.e., PLCs) but a dozen 'other' needs are not met - should we NOT settle for the 70%? Should we just throw up our hands and say "oh, it's hopeless"? There will never be a 'perfect' API solution for all applications. For any proposed API, you or I could find an application that doesn't fit the API. That doesn't devalue the API.
I wasn't thinking of anything as complex as some other writers are suggesting - those could be a HIGHER level above this one. Just, as a 1st phase, a simple "int read_ints()" or "int write_ints()" that could be mapped to Modbus/RTU or DF1/SLC500 style or SNP R% reads, etc. read_ints() takes a parameter structure rich enough to satisfy MOST PLC protocols and causes data to be loaded into a packed memory array pointed to as a parameter.
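Such a parameter structure might look like the following in C; the field names and the mock read_ints() body are my guesses at what Lynn describes, not code from any real driver:

```c
#include <stdint.h>

/* Hypothetical parameter block rich enough for most PLC register reads:
   which station, which register file/table, starting element, count. */
struct plc_req {
    uint8_t  station;    /* slave/node address (Modbus unit id, DF1 node) */
    uint8_t  file_type;  /* e.g. holding register, input, coil, N-file    */
    uint16_t file_num;   /* table/file number where the protocol has one  */
    uint16_t offset;     /* first element to transfer                     */
    uint16_t count;      /* number of 16-bit elements                     */
};

/* Mock: a real driver would frame a Modbus/DF1/SNP request here and
   parse the reply. This one just fills predictable values so the API
   can be exercised. Returns the number of elements read. */
int read_ints(const struct plc_req *req, int16_t *dest)
{
    for (uint16_t i = 0; i < req->count; i++)
        dest[i] = (int16_t)(req->offset + i);
    return req->count;
}
```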
LE/BE would be implicit in your code - if your machine is LE & protocol BE, we'd need some #define to enable the appropriate swapping. At Digi our code tends to use the BSD style hton/ntoh type functions which are #def'd or null depending on the HW we're compiling on.
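A sketch of that BSD-style approach for a single 16-bit register; htons/ntohs are standard socket-library calls that compile to no-ops on big-endian hosts and to byte swaps on little-endian ones, exactly the #def'd-or-null behavior Lynn describes:

```c
#include <stdint.h>
#include <arpa/inet.h>   /* htons/ntohs */

/* Convert one register between host order and the wire's network
   (big-endian) order. The swap, if any, is decided at compile time
   by the headers for the hardware being compiled on. */
uint16_t reg_to_wire(uint16_t host_val)   { return htons(host_val); }
uint16_t reg_from_wire(uint16_t wire_val) { return ntohs(wire_val); }
```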
- LynnL, www.digi.com
Exactly. That would suit my needs. I merely use the register model as it's common to nearly every automation machine. It would be a good idea to use standard network ordering tools for sanity's sake. We'd simply be extending what *nix systems have been doing for decades to the new nodes on the net.
What I proposed for universal comms is even easier to use than that and user transparent. Jiri ( one of our founding MAT project programmers) was even kind enough to do a reference implementation.
A block of variables (registers) is simply mapped between processors and serviced in the background when changed. Accomplishes pretty much what is actually needed for automation. Nothing special need be done to use it and it's about as efficient and universal as can be. Works for IO, IPCs, synchronization, you know, automation stuff. The underlying mechanism could be shared memory on the same host or a very simple and efficient layer on TCP or UDP. Could be done with the bare minimum of resources even on smart IO devices.
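A minimal sketch of that mechanism in C, assuming a shared block of 16-bit registers with a per-block dirty flag (this is an illustration, not code from the MAT project):

```c
#include <stdint.h>

#define NREGS 4096                 /* 4096 16-bit registers = the 8k block */

/* One mapped block: the registers themselves plus a dirty flag. Writes
   go through reg_set(), which marks the block for transmission; the
   background service pushes dirty blocks to the peer and clears them. */
struct reg_block {
    uint16_t regs[NREGS];
    int      dirty;
};

static void reg_set(struct reg_block *b, unsigned idx, uint16_t val)
{
    if (b->regs[idx] != val) {     /* only actual changes cause traffic */
        b->regs[idx] = val;
        b->dirty = 1;
    }
}

/* Called from the scan loop or daemon: returns 1 if the block needed
   service, clearing the flag once it has (notionally) been sent. */
static int reg_service(struct reg_block *b)
{
    if (!b->dirty) return 0;
    /* ... transmit changed block to peer over TCP/UDP/shared memory ... */
    b->dirty = 0;
    return 1;
}
```

The same two calls work whether the "peer" is shared memory on the same host or a socket to another controller, which is the transparency the post is after.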
Many manufacturers already support something similar, but of course, not between competing products. Even this most basic functionality would do away with an enormous number of kludges, workarounds and glue. And it's far more attractive than the complex, bloated, and politically charged alternatives. And no self-respecting manufacturer would be incapable of doing it in a month at most. The interoperability problem has nothing to do with technology.
As you point out elsewhere in your message, this sort of shared I/O or shared memory is used by some manufacturers already. However, this doesn't really answer the question of writing a *standard* for creating drivers and interfacing them with actual devices.
How does the driver exchange data with the application? What is the mechanism used? You've told us how it looks logically to the (PLC) program, and how it goes out on the wires, but what about the bit in between?
Just to prove this doesn't have to be complicated, let's establish, say, an 8k frame to put this stuff in. That's smallish by today's standards, but one could use more than one with a little extra magic. Let's further establish an order of data types. All could support a byte, a word (register), a float and a blob. Without digging up my references, a TCP datagram has a payload of something like 240 bytes. We want to keep it simple, avoid fragmentation and not necessarily be bound to TCP, so let's say we send 128 bytes of data with a few more for data type, type count, byte count, frame count, LRC, etc. We wouldn't need all that for TCP but would for UDP or ? By convention, we send in order from byte to blob.
So my machine establishes a connection and sends a null layout. On the receiving end we look for byte datagrams first; if we don't receive any we move on up the order. If we receive one, we know we are going to have at least 128 byte types and set a pointer or however that machine keeps track. If we receive a second we are going to have 256, etc. When the next type appears or doesn't, we set its space aside, etc., until we have the layout established. Both machines now have a map and know what the data is. This would be self-discovering in the modern fashion. The user would then assign tags to the numbers and use them as normal variables. A write would mark the 128 byte block as dirty and set a flag for a transmit.
A blob is for strings, database records, or other things of variable length. Its number would remain the same for as many blocks as are assigned to that blob. They would be understood by convention as a C string, binary data, etc., and delimited.
Arbitration in this simple model could be as simple as being read only for the non establishing machine, requiring it to map a frame on the originator, or as complex as passing a token back and forth for writes. I myself would probably just send a format datagram for the layout, but folks like plug & play these days.
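Writing the datagram Curt sketches down as a C layout (field names, sizes, and the LRC are my reading of the description above, not an official format):

```c
#include <stdint.h>

/* One 128-byte-payload datagram from the sketch: a small header
   (data type, counts, frame number, LRC) followed by the data block.
   Types are sent in a fixed order, byte -> word -> float -> blob, so
   the receiver can discover the layout without negotiation. */
enum dtype { DT_BYTE = 0, DT_WORD = 1, DT_FLOAT = 2, DT_BLOB = 3 };

#pragma pack(push, 1)
struct frame {
    uint8_t  dtype;        /* enum dtype: what the payload holds      */
    uint8_t  type_count;   /* how many frames of this type follow     */
    uint8_t  byte_count;   /* bytes actually used in payload (<=128)  */
    uint16_t frame_num;    /* position within the 8k mapped area      */
    uint8_t  lrc;          /* longitudinal redundancy check           */
    uint8_t  payload[128];
};
#pragma pack(pop)

/* Classic LRC: the two's complement of the byte sum, so that
   summing data plus LRC yields zero (mod 256) on a good frame. */
static uint8_t lrc8(const uint8_t *p, int n)
{
    uint8_t s = 0;
    while (n--) s += *p++;
    return (uint8_t)-s;
}
```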
I could harden this up and document it as an Open RFC with a day or so of thinking. Any serious omissions or special needs could be added and the problem is solved. I could probably demo it between two of my Linux PLCs after a weekend. The block size would be a little large for serial protocols, but this should work well for Ethernet and other fast networks that can handle the datagram length. It's all payload, so it could be tunneled, routed, folded, spindled, and mutilated across most anything like a network. But layered on top of standard sockets using standard Internet Protocols would be most useful. Establishing a socket to socket connection is already pretty much standardized.
And yes, there are probably gotchas and infelicities, but that's more progress in 10 minutes than has been made in years by consortia and working groups. I am not a super programmer, but I'll bet even I could write this for most any target that runs C. And as I mentioned, even this would solve lots of problems and enable lots of functionality. Oh, and eliminate the majority of workarounds and kludges. Even 8k of universal shared data would serve most needs. And it would scale easily.
I didn't think your original explanation was complicated. Your reply is quite detailed, but it doesn't really answer my question. If you have an application program and a communications protocol driver, how do the two meet? How does the data get from the driver to the application?
You could simply provide a library and compile the driver into the application. However, this means re-compiling to add or change a driver. Many people would find this inconvenient or impractical (especially if they couldn't shut the program down).
The question isn't how you get the bytes out the ethernet (or serial) port, the question is how do you move the information between the application and the driver?
I foresee that you would need to provide the following components:
1. Physical communications port.
2. Low level port driver (this may be provided by the O/S).
3. Protocol driver (e.g. modbus, profibus, etc.).
4. The application interface API (you suggested this as a register map).
5. The missing piece of the puzzle.
6. The application software.
The question is, what is #5? Whatever it is, it should preferably be some existing standard rather than something new you had to invent for the purpose.
London, Ont. Canada
I'm not sure I understand what you're driving at. In the example of a PLC, the application would interact with these virtual registers and data types in the same manner as native registers. In an SLC 500, the ints would be INT registers and you could move, copy, increment, etc., in your ladder logic. Peer to peer, this would work the same on both ends. On a Linux box you could use pointers or an array or whatever you like that can point to memory. In an S10 robot they would become Karel variables; an array would probably work best there, and I don't recall, but I think you could associate them with enumerated types in Pascal fashion. All have some native way to deal with variables of various types. IO would probably work much like Modbus, where 16 outputs or inputs would be mapped to a register.
The idea is not to have your missing link, but for the mechanism to be transparent. Each platform would require the parsing and mapping per the rules, but this is mostly straightforward system programming. Let's consider the simplest case, TCP/IP on Ethernet, as just about everything has this transport/protocol available. In Linux it could all be done from userland, I believe; on other platforms it would vary, but for example, the SLC could simply have another "file" type to provide this extension. Some sort of config record would provide the format: click on int once and float once and you would have 64 new INT registers and X new float registers (I'm not sure how AB stores floats offhand). Enter an IP address and assign labels, etc., as you build the rungs. Print the list of labels for use on the other end and you're ready to rock.
Imagine how many of the problems we see here would be solved if, when you copy a value to a register on your 90/30 (so I'm not picking on AB exclusively), it simply updates a register on your ABB or in my Linux program, and vice versa. A great deal of the traffic here deals with various kludges and gateways and converters and whatnot to accomplish this same goal, often through the use of two dissimilar protocols or special ($$$) hardware and considerable head scratching.
By making it do only one very useful thing (like most *nix tools), you have a very well defined and straightforward job implementing on each platform and it's very simple to use. Define the format, give it an IP address and hit go. All it would take is a little cooperation.
On October 25, 2003, Curt Wuollet wrote:
> I'm not sure I understand what you're driving at.
There are different ways of connecting an application with a driver. OPC uses DCOM. A simple driver library could be linked (statically or dynamically) with the application by re-compiling and linking. There are other possible solutions. What would you do? What limitations would result? What classes of applications (SCADA, soft logic systems, etc.) would these limitations be acceptable for, and for which ones would they not?
London, Ont. Canada
Actually, OPC uses DCOM to remote the OPC client from the OPC server. If the OPC client and server are on the same machine, then DCOM is not needed.
This is more in the way of a service than a driver and would probably be done differently on each platform as part of the systems programming. As such it would likely be firmware on very small platforms and could be implemented on more elegant platforms in any number of fashions. On a PLC, configuring the service would put a hook in the scan loop to establish the connection on startup with other initialization tasks then check for data available and dirty blocks thereafter. Each vendor does something like this when a new card is configured and to add it to the scan. This would be very similar. On a system with a full featured OS it could be written as a daemon writing to a known address in the IO map or shared memory or whatever is most convenient. Or it could be done in the application itself for very closed platforms. So the interface would be native to the platform.
Limitations: All it does is make memory objects global. Not hard realtime. TCP/IP/Ethernet required. (For the simplest case) Might be too chatty for very dynamic IO.
Suitability: Suitable for all applications where this is what's needed.
Open Standards: Ethernet, TCP/IP, sockets, this protocol.
Actually, it's the antithesis of shoehorning a "do all, be all" office automation scheme onto small, simple hardware. That may well be suitable for high tag count SCADA or some few other situations where a PC that runs monopoly products is actually needed. Most of the traffic here is about simply getting a few numbers from a Foo 42 PLC to a Bar 666 PLC or a dozen register values to a PC program. Or vice versa. That should be built in and universal. And shouldn't pass through a MS tollbooth. The status quo, if it can be done at all, is way too expensive, complicated, time consuming and specific.
It would solve the vast majority of peer to peer communication needs and would eliminate needing a PC intermediary. Plug 5 PLCs or other controllers from various vendors into an inexpensive commodity hub or switch and they could exchange numbers or other basic data types seamlessly and almost without effort. They would just "be there". Many people are using Modbus, etc for this type of thing now. This would differ in being peer to peer and transparent with all the details hidden if desired.
This contrasts starkly with some of the bizarre schemes needed to simply pass a few numbers from one PLC to another, from very spendy coprocessors to dedicating IO lines to protocol convertors. This simply shouldn't be a problem for _us_ to solve in the age of ubiquitous and pervasive Ethernet.
I would be curious as to why this wouldn't work, given vendor cooperation :^)
> Just to prove this doesn't have to be complicated, let's establish say
> an 8k frame to put this stuff in.
Curt, the problem isn't coming up with a protocol. The problem is a lack of will to interoperate.
As long as there's no will to interoperate, it'll be Tower of Babel. Where standards are made, they will be clunky, 8-headed monsters that the vendors will subvert anyway.
If there were a will to interoperate, some standard would emerge.
The key to the will to interoperate is the very reasonable expectation
that products will interoperate. The Tower Of Babel is a very big issue
for a lot of people in isolation. The very nature of the problem is such
that everyone has to solve the same issues and waste lots of time and
energy each in their individual case. Almost every issue we see here has
been solved before many times and yet the solutions must be rediscovered
over and over again. If the scientific community worked this way, people
would be announcing the same discovery over and over and progress would
be nearly non-existent. The scientific community operates in a more open
fashion and as a result people can build on the discoveries of others.
Scientists are eager to publish and soon there are many others taking
advantage and last years discoveries are taken for granted.
An improvement on the status quo would be for folks on this forum to
post their solutions and reduce wheel reinvention. But, more to the
point, some things simply shouldn't be problems to be solved when you
pay very big bucks for some fairly modest electronics and the requisite
software to make it work. With ubiquitous networking in the general
computing field considered to be absolutely normal and a given, with
even appliances being routinely networked and even cars reaping the
benefits of embedded networks, why should it be tolerated that our
tools be used in enforced isolation? You expect that any $300 computer
from any vendor will plug right on to your home network and will work
with your ISP. It would be considered junk and returned if it didn't.
There is no longer any reason that this shouldn't be the case with the
gear we use as it is certainly more important that the nodes on your
automation system work together than if your PDA can talk to your MP3 player.
My point in running through how it could be done simply and efficiently
is to focus on the points that it won't happen if people accept the
burden of making it work themselves and that there is absolutely no
justifiable reason why they should. It should be a given, it should be
basic functionality and it should have happened a long time ago. But
the vendors will be very happy to continue selling us expensive junk
and acting in their best interests only as long as it isn't a "must
have" when the check is being signed. When it is a basic expectation,
it will happen. In the meantime, we can expect the announcement of
something called a wheel every week and see a dozen questions on how to get around it.
Either you misunderstood me or I'm missing the point. This thread began as how to talk to existing equipment. And what you subsume in the term "serviced" has to be carried out by some code. Once this code exists, you may hide it from the application level so it appears to do its service in the background.
And shared memory only works between two applications on the same machine, both designed to use it.
What I propose is a way to make it general and universal. True, it won't work on existing equipment, it's sort of a way to fix that, should the vendors ever really want to fix the status quo. My point is that it's trivial to fix a vast number of problems that people have to deal with, IF the desire to fix them is there.
OPC does have overhead compared to industrial networking protocols. So, measured by overhead alone, OPC is the second choice. However, there are other aspects that weigh in heavier: connectivity and ease of use. These add to the list of things a server must do.
Connectivity: The OPC solution isolates the HMI software from the unique complexities of all protocols and device types. Every protocol has different bus arbitration mechanisms, redundancy schemes, addressing schemes, timing, etc. Describing this in a text file is near impossible; a software executable is required. Moreover, it is not sufficient to make a driver for a protocol alone. A network may have devices from several different suppliers, each one with data organized slightly differently. The driver needs to be loaded with additional information regarding the specific device types used, and the configuration in those devices. For good performance, also add the task of "caching" to the list. That is why OPC is required.
The server must be able to execute on a different machine than the client. Therefore, clients need the ability to start and stop the server remotely as required. This is another task in the list.
Ease of use: It must be easy for the user to locate data in the server in order to put it on the screen etc., without having to know device address, files, memory register numbers, and bit positions etc. Therefore, any client must be able to browse any server name space to see what is "in there" and simply point and click to the parameter desired. Keep in mind that the server may be running in a different computer. So this is another task in the list of things a server must do.
Once you start looking at the different aspects of it, I don't think it could be made much simpler than OPC.
I'm not a programmer, but what I think the Linux community should do is develop proxies and stubs that can be used to connect to Windows OPC servers and clients. This way a Linux HMI has a chance of getting accepted, since it can be used with the very wide range of OPC servers available. There is already software for some non-Windows operating systems that does this, so obviously it must be possible.
Jonas Berge SMAR
I wouldn't have a problem if that ease of use were not coincidental with a profound and deliberate refusal to support or even permit the use of anything else on MS platforms. That's kinda like it's easy to use the local power company or telco. Try using anything else :^) We need something like cellular to circumvent the enforced monopoly and stir up some competition. So you could "call" SCADA and enterprise systems without Microsoft tariffs.
On October 15, 2003, Thomas Hergenhahn wrote:
> I think that OPC is only a second best choice. It involves a lot of
> overhead in order to exchange some simple bytes. IMHO, the driver for
> any PLC or other device could basically be reduced to two functions:
> 1.Read a block of bytes from the device.
> 2. Write a block of data to the device. <
I think that most applications would rather deal with data than bytes. If we are talking about a device driver interface for an O/S kernel, you are probably correct. If you are talking about an interface to an application, then dealing with bytes creates dependencies in that application on the specific byte ordering and how data is stored, which complicates applications considerably and makes it more difficult to separate the application from the device data representation. While OPC is probably a poor kernel device driver interface, it is a much better interface for applications. There are non-Windows versions of OPC interfaces available from the Object Management Group and IEC61970-4 that could be used on Linux.
> I know that there are many different ways how manufacturers organize
> the data inside their devices, but this could mostly be fit into a
> common structure like:
> Another standard will be necessary to tell the application how to
> calculate byte addresses from addresses in the device specific
> notation, what abbreviations to use, what endianess the device uses
> and whether byte addresses not on PLC address boundaries are possible.
> This can be done in a text file shipped with the driver lib. <
A better approach for an "application" interface would be one that eliminates the necessity for an application to understand the organization and storage of data in devices. Instead, enable the application to discover the logical structure of data within the context of a data model that describes the data in the terms that the application uses the data. This approach is called a model-driven architecture. See http://www.omg.org/mda for a summary of this approach. OPC, and the related OMG and IEC versions of this interface, are compatible with this model-driven approach.
I would buy that argument if it were ever (to my knowledge) put into practice. Since, in the current market, it is precisely known what is running on the endpoints, and it must be so, I fail to see what good the extra baggage is or that the abstraction serves any purpose. When is the last time you ever bought an OPC driver that would run on any other platform, with a need to actually qualify data? Everything I've seen makes a precise assumption of what you're running. And you don't have any choice. It solves a non-problem: with these assumptions, the manual can cover a singleton pairing with no particular trouble. The theory is good, but the reality renders it moot.
On October 22, 2003, Curt Wuollet wrote:
> I would buy that argument if it were ever (to my knowledge) put into
> practice. Since, in the current market, it is precisely known what is
> running on the endpoints, and it must be so, I fail to see what good
> the extra baggage is or that the abstraction serves any purpose. <
If the systems you work on are very simple, then providing access to data in the context of a model might be overkill. Let me give you a real world example: you want to calculate transformer ratings based on current temperatures and current loads. You have 1,000 transformers in your system. The data you need is stored in a SCADA data base with 300,000 real-time floating point values. How do you find the transformer loads? Using the typical "serves its purpose" approach you suggest, it's simple: you program all the tag names corresponding to the transformer loads into your application. Or, you build a big table that contains all the tag names. Either way, what do you do when you add, change, or delete a transformer? By the way, this happens several times a month. With a model-driven approach, you build an application that can find all the transformers by searching the model. If you change the model by adding or deleting transformers, the application still works without ANY change. If you replace your SCADA data base with a different system, the application still doesn't change, because you use the model to find the data. There certainly is overhead in using a model, but it pays for itself many times over.
While the scale might be different, these same kinds of problems are no different in the automation world. Look at a plant with a few hundred machines. You need to build an application that needs detailed cycle times from all the machines to precisely predict plant output in the future. Every time the machine runs, the parts are different, the operations are different; the kinds of parts, operations, and tooling change on an hourly basis. There are thousands of different combinations of machines, processes, tools, time, etc. You can solve this problem with the low overhead approach, accessing data using the number of seconds in a given operation stored in R4001 in this machine and the number of milliseconds stored in R4023 in that machine, and so on. There are many elaborate schemes available for handling this complexity using front-end processors, data base translators, data transformations, MES, MRP, etc. In some cases the cost of solving the problem is so great that the user decides it isn't worth it. If the cost were much lower, the manufacturer would be able to accomplish a lot more.
There is an overhead, but there is real value in that overhead that extends way beyond what it takes to energize a hydraulic valve to cause a motion. To see the value you have to look beyond that narrow, immediate view of motion control. One of the reasons that automation systems are difficult (and costly) to integrate on a large scale is the lack of a model to describe them accurately and the use of control systems that are either too primitive to understand it or so narrowly tailored that they are a barrier to integration instead of a tool.
> When is the last time you ever bought an OPC driver that would run
> on any other platform with a need to actually qualify data?
> Everything I've seen makes a precise assumption of what you're
> running. And you don't have any choice. <
If by "what you're running" you mean O/S, then, yes, OPC is currently Windows specific. As I pointed out, there are non-Windows specific definitions of OPC interfaces from both the OMG and IEC. And, with OPC XML you can achieve interoperability of OPC data sources with any platform including Linux.
Nearly all the OPC servers I have seen, and the OPC clients that talk to them, are self describing. The client (an HMI for instance) doesn't have to be preprogrammed to understand how a particular device represents or addresses data. A point for a screen is selected from a menu (the item browser). The HMI application doesn't care how bytes are arranged inside the controller. This abstraction is EXTREMELY useful. Without it, HMIs would cost a lot more than they do today. While OPC doesn't specify a specific model, OPC is compatible with a model-driven approach and is an excellent API for applications with only a modest amount of overhead.
> It solves a non-problem with these assumptions, the manual can
> cover a singleton pairing with no particular problem. The theory is
> good, but the reality renders it moot. <
If the only problem you are trying to solve is causing a motion, then anything else is just overhead. If the only problem you ever try to solve is how to cause a motion, you are only putting the overhead somewhere else where the overall cost could be much greater.
I pretty much agree with what you said, but it kinda reinforces my point. Your 1000 transformer management system and the machine maintenance application would both be far better served on a PC than any PLC type system I've seen so far. And yes, with the vast memory and tremendous amounts of compute resource on a desktop they would be trivial apps. Even the current trend towards horrific inefficiency and glop like VB can be tolerated.
But the tools used for definite purposes like automation, should be scaled and suited to the other end. PLCs don't have unlimited resources and efficiency matters. And very few PLC applications grow to the scope where this degree of overhead makes sense.
And supporting and reinforcing dependence on the monopoly is the way to stay planted in the past rather than moving forward to better methods. The recent IP unpleasantness should be a clear warning of the consequences of embedding proprietary IP. Imagine trying to stay in business should Microsoft play games like SCO and call in their markers. This whole industry is owned by Microsoft, as it operates only at their leave. If you don't believe that, try running without them, as I do. That's a very good reason in my book not to depend on anything they control. In any other context, this degree of exposure would be considered insane for most businesses. But somehow it's acceptable, and people even recommend that _I_ do the same, or even encumber our project in this manner. Must be something in the water; I don't see them as being that trustworthy.
Yes, I think this should be top priority - first create a very LOW-level API that allows simple read/write of bits and words. Something that could apply to Modbus, DF1, GE/SNP, PPI and others.
All of the "open source" I've seen so far tends to mush too much of the application function (and structured data) into the "driver" so that the driver cannot easily be reused - especially if a user needs to use different driver+app from several different projects.
Thank you for sharing my opinion.
I was prepared to be flamed for it.
For (Siemens) PPI/MPI, you may be interested in libnodave (libnodave.sourceforge.net), a free library that implements this.
For AB, there is ABEL for Ethernet (I could not test it; I don't have this equipment).
I have drivers for GE and AB DF1 in my project, visual (HMI/SCADA).
Following your line of thought, I will provide the pure communication in separate libraries when I find time to do the work.
Cool! I look forward to interfacing to your, Greg, and Lynn's generic driver architecture from the MatPLC.
I looked at the Sourceforge project and found an interesting link to IBHSoftec. Apparently this company has something called "IBH Link" (the link was present because libnodave supports this hardware).
The following are some quotes describing it.
"If you want to connect your PC via Ethernet just take the IBH Link. The IBH Link is a very small gateway integrated in a Sub D connector."
"With IBH Link online functions are possible via Profibus DP with up to 12 Mbit/s or via PPI/MPI. The IBH Link will reduce your costs because there is no need for the CP?s from Siemens nor the software Simatic Net is required."
The picture was of a Profibus D-shell connector with an ethernet cable coming out of the end of it. That is, this seems to actually be a DP/MPI/PPI to ethernet gateway in a connector. Supposedly it's also cheaper than typical CP boards.
Is anyone familiar with this product? Would anyone care to comment on it? Helmholtz is also selling what appears to be the same product with their name on it (they call it "Net Link"). Neither company appeared to have any detailed information available on their web sites. They also do not indicate whether the PC can be a master or just a slave.
I can however imagine quite a few uses for this item, if it really does work as advertised. Linux support is provided through the libnodave library.
London, Ont. Canada
On October 21, 2003, Michael Griffin wrote:
> I looked at the Sourceforge project and found
>an interesting link to IBHSoftec. Apparently
>this company has something called "IBH Link"
>(the link was present because libnodave
>supports this hardware).
> The following are some quotes describing it.
> "If you want to connect your PC via Ethernet
>just take the IBH Link. The IBH Link is a very
>small gateway integrated in a Sub D connector." <
It is a gateway between ethernet and:
1. Profibus (I never tested this)
2. MPI, a proprietary Siemens protocol, apparently an extension of Profibus and carried over it.
(That is what I tested and what libnodave supports)
3. The PPI interface of the Siemens S7-200 family (I never tried)
> "With IBH Link online functions are possible
>via Profibus DP with up to 12 Mbit/s or via
>PPI/MPI. The IBH Link will reduce your costs
>because there is no need for the CP's from
>Siemens nor the software Simatic Net is
>required." <
CP is "communication processor" in Siemens terminology.
> The picture was of a Profibus D-shell
>connector with an ethernet cable coming out of
>the end of it. That is, this seems to actually
>be a DP/MPI/PPI to ethernet gateway in a
>connector. Supposedly it's cheap(er) as well
>than typical CP boards. <
But you still have to have an application that can understand, deal with, and itself form the contents of the packets. IBH ships some Windows DLLs and programs with it, which provide:
1. An interface to Siemens Step7 programming software
2. DDE and OPC servers (if I got it right)
3. A library in C, Pascal/Delphi and VB flavours, which is in turn a client to the DDE or OPC server, to access the PLC from your own programs.
> Is anyone familiar with this product?
>Would anyone care to comment on it? <
Yes, I do care, as I use it in a plant of crucial importance at work. I had to redo control electronics from the early seventies, and replaced them with an S7-315 and distributed I/O from third parties.
I was thinking about using a CP 343(-IT) to connect a computer to it. This computer should:
1. Provide a small HMI application to do some settings.
2. Provide an overview HMI to eliminate the need for people to walk about the plant grounds and write down process data.
3. Collect some crucial data for the plant operation at a rate comparable to a paper recorder.
Point 3 was where I needed the maximum speed to read out signals from the PLC.
Nobody at Siemens could tell me what data rate I could achieve with the CP.
Then I became aware that it is connected to the serial backplane bus of the S7-300 at 187.5 kbaud, and shares that bus with the local I/O.
Then I read about the BH-Link and ordered one.
I put it on the MPI connector - also 187.5 kbaud, but not shared with the I/O. I did not try it on Profibus at up to 12 Mbaud, because I feared (I cannot tell whether this is justified) that it could disturb (or slow down in a non-predictable way) the distributed I/O communication.
I read a block of 140 bytes 5 times per second (with libnodave code under Linux). That does not sound like much, but it is 4 times what I could get with an MPI adapter on a serial interface.
The MPI protocol involves some overhead, and the CPU is set up (factory setting) to dedicate no more than 20% of its time to communication. I would like to know whether a CP does better.
>Helmholtz is also selling what appears to be
>the same product with their name on it (they
>call it "Net Link"). Neither company appeared
>to have any detailed information available
>their web sites. They also do not indicate
>whether the PC can be a master or just a slave. <
It can only be a master on MPI, and I suppose the same is true with Profibus(?).
The following is on the IBH Link profibus/MPI/PPI to ethernet gateway.
On October 22, 2003, Thomas Hergenhahn wrote:
> Then I read about the BH-Link and ordered one.
> I put upon the MPI connector, 187,5 kBaud also, but not shared with I/O. I
> did not try it on Profibus with up to 12MBaud, because I feared (cannot
> tell whether this is justified) that it could disturb (or slow down in a
> non predictable way) the distributed I/O communcation. I read a block of
> 140 bytes 5 times per second. (with libnodave code under LINUX)
> This sounds not much, but is 4 times what I could get with MPI adapter on a
> serial interface. MPI protocol involves some overhead and the CPU is setup
> (factory setting) do dedicate not more than 20% to communication. I should
> like to know whether a CP does better.
I can understand some concern about whether the backplane could be a bottleneck. If the IBH-Link could be a DP slave, then the S7-315-DP-2 has a DP master integrated into the CPU. I can think of a couple of applications that would interest me where this device would be useful, and in both cases it would seem simpler for the PC to be a slave.
I have one project in particular which I am doing research on now, that involves updating some PC based test equipment. I was thinking of adding an option for a profibus interface (the PLC would command the PC to conduct a test) and this seems like a nice low cost solution.
Another application I have given a lot of thought to would be where a PC monitors machinery for equipment performance statistics (similar to what you have described). In the scenario I have been considering, the PC would poll the PLC (or vice versa) quickly, but exchange a minimal amount of data. If the PC can communicate very quickly with the PLC, then only basic logic states would need to be exchanged (mode, cycle status, etc.) along with a few words of other data (alarm words, etc.).
If the polling can take place at least 10 times per second (20 times would be better), then most of the performance analysis logic can be moved from the PLC to the PC. This is significant, because my research seems to indicate that minimising the supporting PLC programming is the key to minimising the overall project costs. As long as there are no per-unit software licensing costs (Linux/Apache/PHP/Python) and the PC programming is spread over a number of identical units, the PC doesn't dominate the overall costs.
The ethernet gateway interface is also attractive because this allows the use of any of a number of small PCs which are now available. A PCI card always seems to add a lot of bulk to any of the package formats I have looked at. If the gateways can be DP slaves, then one PC could monitor several machines without connecting the different DP networks together.
> It can only be a master on MPI and I suppose it's the (same with Profibus ?) <
I didn't see any detailed information on IBH's web site. Is the technical information you got reasonably detailed, and what did you find to be the best source for it? I'm not sure if IBH is the actual manufacturer (since Helmholtz seems to be selling the same thing), or if they are who I should be contacting about this.
This was a bit of a long letter, but this subject interests me greatly.
London, Ont. Canada
> I can understand some concern about whether the backplane could be a bottleneck. If the IBH-Link could be a DP slave, then the S7-315-DP-2 has a DP master integrated... <
I don't know whether the IBH-Link can be a DP slave. I can only tell you that I use the DP master for distributed I/O only.
> I have one project in particular which I am doing research on now, that involves updating some PC based test equipment. I was thinking of adding an option for a profibus interface (the PLC would command the PC to conduct a test) and this seems like a nice low cost solution. <
I think this can equally well be done by reading some flags which are interpreted as commands by the PC program (and writing back data and "command done" flags).
> ...times per second (20 times would be better), then most of the performance analysis logic can be moved from the PLC to the PC... <
As I got about 5 reads per second, I suppose you will not get 20 with MPI. I would propose storing multiple samples of data in a data block in the PLC, then reading 5 to 10 samples in one transmission and writing back some command that lets the PLC reuse the memory afterwards.
> The ethernet gateway interface is also attractive because this allows the use of any of a number of small PCs which are now available. A PCI card always seems to add a lot of bulk to any of the package formats I have looked at. <
> If the gateways can be DP slaves, then one PC could monitor several machines without connecting the different DP networks together. <
That can be done with MPI also:
Connect different IBH Links to one network hub.
> I didn't see any detailed information on IBH's web site. <
I thought it had been there, but I cannot find it anymore. Maybe I'm wrong, because a CD that came with the device had a layout like the web site.
> Is the technical information you got reasonably detailed, <
No, it isn't. It is enough to use the device with the software shipped with it (under Windows only).
> and what did you find to be the best source for it? <
Sniffing the bytes and comparing them to what I had found out about Siemens protocols before.
> I'm not sure if IBH is the actual manufacturer (since Helmholtz seems to be selling the same thing), or if they are who I should be contacting about this. <
Yes, and there are more. I think the original manufacturer is Hilscher (www.hilscher.de), because I found the name in some data sent by the device. But there may be more than one type of firmware, because Deltalogic (www.deltalogic.de) offers what I would consider the same hardware in two flavours, with or without the capability to use it with Step7 programming software.
> This was a bit of a long letter, but this subject interests me greatly. <
If you want the manual (PDF format; English?, at least German - I do not have the disk at hand now) which came with the device, send me a private mail. You can find my address on:
On October 27, 2003, Thomas Hergenhahn wrote:
(Re: Profibus/ethernet gateway in a D-Shell).
> Yes and there are more. I think, the original manufacturere is Hilscher
> (www.hilscher.de), because I found the name in some data sent by the
It appears that Hilscher is the original manufacturer. They call it "Netlink" and they mention that they do custom OEM versions. It is likely that it is based on some of the chips and modules they also sell to OEMs.
It seems the correct web site address happens to be "hilscher.com", rather than "hilscher.de". The former does industrial communications, while the latter appears to make artificial legs.
> If you want the manual( PDF format,English?, at least German, I do not have
> the disk at hand now) which came with the device send me a private mail.
> You can find my address on: libnodave.sourceforge.net
Thank you for the offer, but at this time my interest is not serious enough to put you to that trouble. I will follow this up further with Hilscher or their distributors when my needs for it are more immediate.
London, Ont. Canada
I found the link to the IBHNet-Link documentation,
but there seems to be only a German version.
The URL is:
This is what IBH ships with the device.
Is anyone interested in helping me create a simple "Recommendation" for a nice low-level API to be used to create sharable/reusable code for simple PLC protocols? We will NOT be creating any code, just a simple document & API (model?) for talking to things like Modbus, DF1, SNP, HostLink, etc.
Grand goal would be: if I share some Modbus code that follows this doc & API, then someone else using the same API with, say, DF1 would be able to absorb & reuse my code with a minimum of effort to add Modbus. Target is small embedded systems - not Windows-class solutions & not DLLs or linkable modules. No bloat, VM, or everything-is-32-bit assumptions.
Anyone interested, contact me at lynn at iatips.com (plus the @ in the right place ;^)
Again, this isn't a coding project - but I'll probably create a simple Modbus code for the API as test/proof/example.
- LynnL, www.digi.com