The OPC Community Forum.
ProfiNet or ProfiNot
Is this not the most contrived and glib attempt at "Industrial Ethernet" ever proposed?
By Jim Stewart on 16 May, 2001 - 3:50 pm

I just finished looking over the "draft" version of the ProfiNet specification from the PNO (Profibus Not-so Organization). I must say that it is remarkably unremarkable.

In order to avoid any conversational quagmires with respect to Industrial Ethernet as a whole, let us assume that the automation industry has a real need for Ethernet technologies vis-à-vis TCP/IP etc. What I see in ProfiNet is a rehashing of 5-year-old technologies (at best) in a slightly different form.

Although I mostly agree with the overall vision of integrating Engineering Applications (configuration tools, programming tools, runtime statistical analysis tools) into the Open Automation arena, the PNO clearly loses me from that point on. You really don't have to sweat the details to find some glaring contradictions with the overall "Open Automation" objective. The most obvious one is the fact that ProfiNet engineering tools will use OLE/COM as the object model, and they further state:

"The use of interface and communication standards that were developed in the Microsoft world does not necessarily mean that PROFInet is confined to Microsoft operating systems......"

then ONE paragraph later:

"....PROFInet Engineering Systems should therefore be conceived for the PC platforms with WINDOWS NT/2000 operating system."

Clearly, an "Open Automation" system that is based on a closed proprietary operating system, and on that operating system's technologies for distributed computing, is less than ideal. Aren't we giving up a large amount of the benefits of "Open Automation"? There are scores of purely open, completely committee-based technologies for distributed computing based on component object principles. All of those technologies, besides being better object models by far, have implementations for Microsoft Windows platforms as well as dozens of other platforms. Sure, there are implementations of COM/DCOM that are not tied to Microsoft, but those only exist because the OPC Foundation used COM/DCOM as their object model. At best, these are "catch up" copies of the original.

With all the choices of open, distributed computing technologies, why would COM/DCOM be chosen? Even Microsoft has abandoned these technologies for enterprise business-to-business communications (because they really don't work in WAN implementations). Even SOAP would have been a better choice. But now that they have chosen the COM/DCOM model, why create a new standard at all? Why not just use OPC?

Really, what is the point here? Do they really want to make a truly modern, open and innovative automation platform, or to pump out as quickly as possible the same old technologies that have been kicking around for years, thinly disguised as a NEW cutting-edge technology?

By Jake Brodsky on 17 May, 2001 - 1:55 pm

So you too wonder about why things are done the way they are in the world of controls. So do I.

From a practical perspective, I really don't care whether the protocol is truly "open" or not. What I need is interoperability and competition so that manufacturer A can talk to manufacturer B on the same network. If I don't like the way A is behaving I can replace it with a similar box from manufacturer C.

For me open protocols are a means to this end. I'm alarmed at the reliance on one company in this regard. Microsoft is not the only game in town. I'm not suggesting that everybody demand new releases in Linux. Shucks, I'd settle for ANY other operating system --even a proprietary one.

Again, while "open-ness" is very nice, it's only a means to an end. The bottom line is interoperability and competition.

By Roger Irwin on 21 May, 2001 - 5:07 pm

> Jake Brodsky wrote:
> So you too wonder about why things are done the way they are in the world
> of controls. So do I.

Of course no major manufacturer actually WANTS an open protocol that will allow small vendors to tap into their user base, or lay themselves open to competition. It is the customer's responsibility to INSIST on openness for their own protection.

In IA, communications products tend to have a very high degree of conformance to one of the plethora of 'standards' out there. Unfortunately that does not assure interoperability: as standards fall short of defining every element required in a connection, there is always space to make a conformant device not interoperate with another conformant device, often in a conformant manner!

Also, while manufacturers are quick to stick standards labels on their products, technical support frequently refuses to help resolve problems in a system where an alternative (compliant) product has been used for some element in place of their own product. E.g., connect Foo Engineering's PLC to Foo Engineering's DP encoder with Baa Engineering's cable; when things do not work (even if there is no sign of comms problems), the first thing they suggest is to replace Baa Engineering's cable with their own.

Mind you, who should help you de-bug a multi-vendor system?

That is why many people regard the only real open systems to be open source ones. It is the only way that eventual problems can be identified and
rectified.

By Alex Pavloff on 21 May, 2001 - 5:10 pm

> Mind you, who should help you de-bug a multi-vendor system?

This is a very good question with no clear answer. From my experience, the answer usually is "the manufacturer of the last thing you plugged in."

> That is why many people regard the only real open systems to be open
> source ones. It is the only way that eventual problems can be
> identified and rectified.

I would agree, but little pieces of hardware with nothing more than a <insert Bus-Of-The-Day> interface do not lend themselves to open source the way operating systems and software running on a PC do.

By Curt Wuollet on 22 May, 2001 - 1:43 pm

Hi Alex

Perhaps the little pieces of hardware don't lend themselves to embedded Linux, but eCos is open source and is true "deep embedded" fare. From what I've seen, the fieldbus stuff tends towards "big" processors for easy development rather than the severe cost constraints of high-volume embedded apps. With the outrageous margins, embedded Linux could be competitive on, say, StrongARM or DragonBall, perhaps even Geode hardware. I have a goal of Ethernet I/O running Linux on both ends. $10.00/point is easily doable with no software licenses or development tool overhead, even in low volumes.

The mistake that is holding back fieldbus nodes is to start with proprietary single-sourced silicon and proprietary protocols where the IP licensing costs more than the device itself. When you look at the cost of all the leeches, it is impractical to embed a network stack in devices. Yet no one so far has realized that the economics of Open Source and commodity Ethernet silicon are about the only way to open the floodgates and move things forward. It is hilariously ironic that the thing that is holding back massive fieldbus deployment is their own anal attitude that they must own or control everything. The market is so badly fragmented by these control freaks that new additions are doomed before they start, simply because they won't use what is established, cheap, and ubiquitous. Greed is its own reward when you reinvent the world. If I can buy $9.00 Ethernet cards, no proprietary scheme is ever going to achieve the volumes to be competitive.

Regards

cww

By Alex Pavloff on 23 May, 2001 - 10:27 am

> The mistake that is holding back fieldbus nodes is to start with
> proprietary single-sourced silicon and proprietary protocols where the
> IP licensing costs more than the device itself. When you look at the
> cost of all the leeches, it is impractical to embed a network stack in
> devices. Yet no one so far has realized that the economics of Open
> Source and commodity Ethernet silicon are about the only way to open
> the floodgates and move things forward. It is hilariously ironic that
> the thing that is holding back massive fieldbus deployment is their
> own anal attitude that they must own or control everything. The market
> is so badly fragmented by these control freaks that new additions are
> doomed before they start simply because they won't use what is
> established, cheap, and ubiquitous. Greed is its own reward when you
> reinvent the world.

They do have one point though. TCP/IP over Ethernet isn't always fast enough. Usually...

> If I can buy $9.00 Ethernet cards, no proprietary scheme is ever going
> to achieve the volumes to be competitive.

Oh, I agree, but I wouldn't even use Linux if I'm trying to get cheap IO. You could make a system with an ARM processor running ucLinux (or something) off of some flash. People have already got that, though. I was at the Embedded Systems Conference in SF last month, and while there were a hell of a lot of Linux folks, there were just as many folks adding TCP/IP stacks to their already cheap hardware.

For an example: check out www.zworld.com and look at their RabbitCore series. You could make a TCP/IP IO module with 34 IO lines, and the most expensive unit is $89 for a quantity of 1! That's less than $3 a point for hardware costs (not counting the cost of packaging the devices or any margin). Sure, I have to use their special C as opposed to writing Linux, but if you want to use cheap hardware to make a cheap TCP/IP IO device, even Linux is too fat.
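For what it's worth, the cost-per-point arithmetic checks out (using only the figures quoted above):

```python
unit_cost = 89.00   # most expensive RabbitCore unit at quantity 1, per the post
io_lines = 34       # IO lines on the module
cost_per_point = unit_cost / io_lines
print(f"${cost_per_point:.2f}/point")  # $2.62/point, under the $3 figure claimed
```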

Of course, you have to spend $279 for the development kit, but that's quite reasonable, as I think even you would admit. :-)

By Jake Brodsky on 23 May, 2001 - 8:12 am

> Mind you, who should help you de-bug a multi-vendor system?

Nobody. Hell, I have difficulty getting vendors to admit there is a problem even when their equipment is on all sides of the problem.

I'm in favor of published standards (though it may be a licensed standard) because it gives me something to point to besides an idiot light which says "comm fail."

> That is why many people regard the only real open systems to be open source ones. It is the only way that eventual problems can be identified and rectified.

Openness has many degrees of freedom. The old DEC VMS operating system wasn't "open," but the interfaces were well documented and you could obtain a licensed copy of the kernel code for modest cost. As a result it enjoyed years of user support that other operating systems couldn't seem to attract.

Likewise, we don't need a completely open design with completely open bits of code. What we need is a very tight, well defined standard interface. The downside is that I see no motive for a large company to do this for the control system community. A small company could do this, but they'd have to take a very enlightened approach to partnering with other firms. I don't see that happening any time soon either.

So we're left with efforts such as the Puffin PLC project. I sincerely hope they succeed. It would be nice to make the code base a sort of standard. It would be nice to have integrators build custom systems that others could work on instead of large companies packaging their wares for ridiculous mark-ups while hiding as many bugs as possible.

My point of the previous post was that while Open Source initiatives are a means to an end, they are not the only means to that end. Furthermore, we still don't know how well the open source initiatives will stand the test of time.

Linux itself didn't get much trade rag visibility until three or four years ago. It's just now penetrating the mainstream media. While I hope it stays there and grows, I still have doubts about its long term mission stability. I fear the risk of another fractured standards market as with other *IX versions of the past. I hope it never goes that far, but we have lots of past experience suggesting that it can.

By Ralph Mackiewicz on 23 May, 2001 - 9:35 am

> > So you too wonder about why things are done the way they are in the
> > world of controls. So do I.
>
> Of course no major manufacturer actually WANTS an open protocol that
> will allow small vendors to tap into their user base, or lay
> themselves open to competition. It is the customer's responsibility to
> INSIST on openness for their own protection.

Yes EXACTLY. There is a lot of hand-wringing about all the proprietary stuff used in IA but it is mostly misdirected. Customers buy all that proprietary stuff. That is why it exists.

> In IA, communications products tend to have a very high degree of
> conformance to one of the plethora of 'standards' out there.

In the words of Bob Metcalfe, co-inventor of Ethernet: "Standards are great. Everyone should have one of their own." He was being sarcastic.

> Unfortunately that does not assure interoperability as standards fall
> short of defining every element required in a connection, there is
> always space to make a conformant device not interoperate with another
> conformant device, often in a conformant manner!

Such interoperability issues are inevitable in ANY standard. We are all human beings, and it is not possible to define a *useful* comm standard that does not provide some choices to the developer (if the standard is too rigid it becomes too narrow in its scope; flexibility brings choices). Not being conspiratorial in nature, and based on long experience implementing public communications standards, I find that anytime there are choices, each developer will likely make different ones. Not to purposely foil interoperability, but due to simple human frailty.

> Also, while manufacturers are quick to stick standards labels on their
> products, technical support frequently refuses to help resolve
> problems in a system where an alternative (compliant) product has been
> used for some element in place of their own product. E.g., connect Foo
> Engineering's PLC to Foo Engineering's DP encoder with Baa
> Engineering's cable; when things do not work (even if there is no sign
> of comms problems), the first thing they suggest is to replace Baa
> Engineering's cable with their own.

This is a shame, but it is not universal. You shouldn't put all manufacturers in the same bag. My company has worked with numerous customers on interoperability issues, which resulted in identifying bugs in our competitors' products as well as our own. We have even made changes in our product to interoperate in spite of other people's bugs. If you are committed to standards, dealing with interoperability is a simple fact of life. The manufacturers you have this problem with are not committed to standards, probably because most of their customer base is perfectly satisfied with a proprietary approach.

> That is why many people regard the only real open systems to be open
> source ones. It is the only way that eventual problems can be
> identified and rectified.

It's not the only way; it is one way. You can also have a committed vendor. You can also buy product source code in some cases (albeit not "open source").

Regards,
Ralph Mackiewicz
SISCO, Inc.

By Curt Wuollet on 25 May, 2001 - 4:13 pm

> Jake Brodsky wrote:
> > Mind you, who should help you de-bug a multi-vendor system?
>
> Nobody. Hell, I have difficulty getting vendors to admit there is a
> problem even when their equipment is on all sides of the problem.
>
> I'm in favor of published standards (though it may be a licensed standard)
> because it gives me something to point to besides an idiot light which
> says "comm fail."
>
> > That is why many people regard the only real open systems to be open
> > source ones. It is the only way that eventual problems can be
> > identified and rectified.

This is a very, very large benefit. Who hasn't had a project delayed because they had to search for answers or work around problems? I've even run into dead ends and had to completely change approaches when information or solutions were not forthcoming. The project I'm doing now has flowed much faster and more steadily because I can fix problems as they come up. My stress level has been minimal and everybody's happier.


> Openness has many degrees of freedom. The old DEC VMS operating system
> wasn't "open," but the interfaces were well documented and you could
> obtain a licensed copy of the kernel code for modest cost. As a result it
> enjoyed years of user support that other operating systems couldn't seem
> to attract.
>
> Likewise, we don't need a completely open design with completely open bits
> of code. What we need is a very tight, well defined standard interface.
> The downside is that I see no motive for a large company to do this for
> the control system community. A small company could do this, but they'd
> have to take a very enlightened approach to partnering with other firms.
> I don't see that happening any time soon either.

I think possibly the best thing would be for the existing front runners to be "opened up" enough to enable other companies and even OSS projects to use them without concern. The standard for "open enough" could easily be measured as the point where the effect is noticeable. I have asked Modicon, for example, to grant the Puffin PLC project permission to use the Modbus protocols, particularly Modbus/TCP, in a manner consistent with our OSS goals. Not to abandon all rights or anything drastic like that, but simply to let LPLC use them in the same manner as I can as an individual, without the threat of legal action. They have by far the most liberal license I have seen in this business, and are so very close to what we need that it's just language differences, and the fact that we aren't a legal entity and so can't enter into agreements, that separate us. I would be happy with a letter from an honorable company officer simply stating that they won't come after us. It's obvious that they want Modbus/TCP to become a standard the right way and are making it as easy as possible for others to adopt it. Even if we can't find a way, I still applaud them for trying to do something that really needs to happen. We would like to reward them by adopting and popularizing the protocol if we can.
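Part of what makes Modbus/TCP easy to adopt is how little there is to it. As a rough sketch, building a standard Read Holding Registers request (function code 03) is just a 7-byte MBAP header plus a 5-byte PDU:

```python
import struct

def read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Build a Modbus/TCP request ADU for function 03 (Read Holding Registers).
    MBAP header: transaction id, protocol id (always 0), length of the
    remaining bytes (unit id + PDU), unit id; then the PDU itself."""
    pdu = struct.pack(">BHH", 0x03, start_addr, count)          # fc, addr, qty
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

# 12 bytes total: read 10 registers starting at address 0 from unit 1.
frame = read_holding_registers(1, 1, 0, 10)
```

Sending that frame over an ordinary TCP socket to port 502 is the whole transport story, which is why no special silicon or licensed stack is needed.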

> So we're left with efforts such as the Puffin PLC project. I sincerely
> hope they succeed. It would be nice to make the code base a sort of
> standard. It would be nice to have integrators build custom systems that
> others could work on instead of large companies packaging their wares for
> ridiculous mark-ups while hiding as many bugs as possible.

Thank you!

> My point of the previous post was that while Open Source initiatives are a
> means to an end, it's not the only means to that end. Furthermore, we
> still don't know how well the open source initiatives will stand the test of
> time.

We are, however, abundantly familiar with the effects of the status quo. I don't see OSS as very risky in comparison. If people want to use it, it will always be there, and just as viable. How could it be invalidated?

> Linux itself didn't get much trade rag visibility until three or four
> years ago. It's just now penetrating the mainstream media. While I hope
> it stays there and grows, I still have doubts about its long term mission
> stability. I fear the risk of another fractured standards market as with
> other *IX versions of the past. I hope it never goes that far, but we
> have lots of past experience suggesting that it can.

If it does, it will certainly be a loss for everyone. The GPL goes a long way to prevent this from happening. The intense pressure on the Linux companies to make money is stressing the community ties. There have been those who thought they could have the benefits without the responsibilities; they are no longer with us. As the user base becomes more general and less idealistic, there is the danger that greed and avarice will undo what sharing and cooperation have done.

Regards

cww

By Michael Griffin on 18 May, 2001 - 10:03 am

I recently went to a Profibus seminar to see what was new. They talked almost exclusively about DP, some about PA, mentioned FMS, and gave a short spiel about Profinet at the end. If I read between the lines, my impression is that DP is doing well, they hope to sell a lot of PA, FMS is
dead or dying, and Profinet is going to be a solution looking for a problem.

The Profinet committee has apparently decided that they don't want to implement it by tunneling the regular Profibus protocol inside TCP/IP packets (as some other people are doing). They want to use COM/DCOM because they see it as a means of connecting to the highest enterprise levels. Connections to lower levels will require some sort of gateway (possibly another PC running more software). The overhead presentations I saw showed PCs being connected up to enterprise-level servers (e.g. MRP). They didn't show how to connect down to small machine controllers. Perhaps I have somehow gained the wrong impression, but I think the various members of the Profinet committee don't have a clear and consistent view of what Profinet is supposed to do for people.

I agree with you that COM/DCOM may be a bad choice to base Profinet on. My own reason for thinking this is that I suspect it will prove to be a technological dead end. This isn't a problem for office computing, where annual software upgrades are somehow considered to be acceptable. It is not acceptable, or even feasible, in most industrial applications. I would not want to be locked into an industrial network which I could not be confident was going to be a stable and compatible implementation for many years to come. Once machines of various ages and origins start talking to one another or to other systems, long-term compatibility becomes a major issue.

By Ronald H. Nijssen on 18 May, 2001 - 10:04 am

Is OPC not a subset of COM/DCOM, with a scope largely driven by HMI vendors? To connect a variety of systems in the future, e.g. drives and other "intelligent" discrete devices, we need to go beyond the "Item-Read-Write" paradigm of OPC.

Isn't the real benefit of Profinet an Application Layer, running on different physical layers, that allows communication to be abstracted beyond any vendor's protocol and PLC paradigm?

I think that it's a good idea to make system interoperability possible based on paradigms already available. The fact that these paradigms (in this case COM/DCOM) are going to be implemented on systems and modules that they were never found on, e.g. PLCs, remote IO racks, motor starters etc., will make it a protocol that can exist regardless of Microsoft's operating systems.

Imagine: you buy a machine from a vendor in Japan, and the only thing he sends you is a "Type Library" of the properties that his machine exposes on Profinet. You use that type library on your own PLC (from another vendor) to create the interaction required, and when the machine from Japan arrives the integration is seamless. Can you think of any technology that will deliver this today and can be accepted by most vendors?

Ronald Nijssen

By Jim Stewart on 18 May, 2001 - 12:24 pm

I think generally the things that are done with drives, PLCs, whatever, are just data reads and writes anyway. Think about it... even a robot system is generally just given a recipe-like structure. Nothing more than a read or a write.

Of course, looking forward, it would be nice to have more abstract, high-level functionality. You ask what system could use a "typelib" system and have all sorts of equipment talk to each other? None! Nothing based on COM/DCOM or anything else currently implemented. They are defining a new application layer here, right? Or are you suggesting that the only component object technology/distributed computing system that could do what you want is made by Microsoft? I say look around the industry... most WAN infrastructure IS NOT COM/DCOM based or anything else Microsoft.

The point I was making has nothing to do with whether the overall idea of ProfiNet is good. As I pointed out, I generally agree with the overall plan. The question is why pick a technology that has proven to be less than completely functional and is owned by a private company?

I think the answer is that they wanted to slap something together quickly, and most people understood or already had COM/DCOM-based stuff. I think some more thought and research would go a long way...

By Michael Griffin on 21 May, 2001 - 5:13 pm

At 16:47 18/05/01 -0400, Jim Stewart wrote:
<clip>
>Or are you suggesting that the only
>component object technology/distributed computing system that could do
>what you want is made by Microsoft? I say look around the industry...most
>WAN infrastructure IS NOT COM/DCOM based or anything else Microsoft.
>
>The point I was making has nothing to do with whether the overall idea of
>ProfiNet is good. As I pointed out, I generally agree with the overall
>plan. The question is why pick a technology that has proven to be less
>than completely functional and is owned by a private company?

I guess the Profibus organisation will have to change their slogan: "PROFIBUS - The world's leading vendor-independent, open-communication standard for automation in manufacturing and process control." They can't claim to be "vendor-independent" or "open-communication" if they are going to base their standard on one vendor's (Microsoft's) proprietary system.


>I think the answer is that they wanted to slap something together quick
>and most people understood or already had COM/DCOM based stuff. I think
>some more thought and research would go a long way...
<clip>

The Profibus committee has been promising Profinet for some time now, and it still looks far from finished. I think you are correct, and that they wanted to get something out quickly. If they waited much longer, nobody would be interested (if it isn't too late already). With every new bus and every new standard that comes along, I have one more reason to think that this industry is going down the wrong road.


**********************
Michael Griffin
London, Ont. Canada
mgriffin@odyssey.on.ca
**********************

By Curt Wuollet on 22 May, 2001 - 1:45 pm

Hi Michael, Jim

Michael Griffin wrote:
> >Or are you suggesting that the only
> >component object technology/distributed computing system that could do
> >what you want is made by Microsoft? I say look around the industry...most
> >WAN infrastructure IS NOT COM/DCOM based or anything else Microsoft.
> >
> >The point I was making has nothing to do with whether the overall idea
> >of ProfiNet is good. As I pointed out, I generally agree with the
> >overall plan. The question is why pick a technology that has proven to
> >be less than completely functional and is owned by a private company?

It's simple: because Microsoft will pay them to do it, and they are already in its pocket. Users be damned. Elegance and good engineering have nothing to do with it. It's the power of a monopoly to make things happen their way. No visible ties or payoffs; things just always happen their way.

> I guess the Profibus organisation will have to change their slogan:
> "PROFIBUS - The world's leading vendor-independent, open-communication
> standard for automation in manufacturing and process control." They can't
> claim to be "vendor-independent" or "open-communication" if they are going
> to base their standard on one vendor's (Microsoft) proprietary system.

And a poor system at that.

> >I think the answer is that they wanted to slap something together quick
> >and most people understood or already had COM/DCOM based stuff. I think
> >some more thought and research would go a long way...

> The Profibus committee has been promising Profinet for some time now,
> and it still looks far from finished. I think you are
> correct, and that they wanted to get something out quickly. If they waited
> much longer, nobody would be interested (if it isn't too late already).
> With every new bus, and every new standard that come along, I have
> one more reason to think that this industry is going down the wrong road.

What I don't understand is why they persist in using a failed model. Time after time they try the same failed approach that has led to a zillion "standards" and systems, each with an insignificant market share. If they for one moment would look beyond short-term profit and see what one shared, truly Open Standard with most of the market would do for them, everyone would be much farther ahead. It's really paradoxical: the only thing they all have in common is a monopoly. Hmmm....

It's a lot like the teenager who fails at school and never finds a job because they bend all their efforts to becoming a rock star. It statistically never happens, but, boy, if it did!!! No one wants to develop the mundane, pedestrian technology that everybody actually uses. Instead they spend all their time and resources ensuring that, if it ever does sell, they'll get every single nickel. Unfortunately, in the process, they produce things that have zero widespread appeal and smell like sun-ripened fish.

Regards

cww

By Alex Pavloff on 18 May, 2001 - 2:13 pm

> Is OPC not a subset of COM/DCOM? With a scope largely driven
> by HMI vendors. To connect a variety of Systems in the future, e.g.
> drives and other "intelligent" discrete devices, we need to go beyond
> the "Item-Read-Write" paradigm of OPC.

That defeats one of the major benefits of OPC -- that is, everything looks the same! I don't have to write any special code for a special device.

> Isn't the real benefit of Profinet an Application Layer,
> running on different physical layers, that allows communication to be abstracted beyond any
> vendors protocol and PLC paradigm.

We've got that already. It's OPC (love it or hate it)!

> I think that it's a good idea to make system interoperability possible
> based on paradigms already available, the fact that these paradigms (in
> this case COM/DCOM) are going to be implemented on systems and modules
> that they never were found on, e.g. PLCs, Remote IO racks, Motor
> starters etc, will make it a protocol that can exist regardless of
> Microsoft's operating systems

You mean you want ANOTHER protocol? Can I use TCP/IP? Or do I just have Ethernet? Or do I just have 485? There are just too many options, and too many things out there that cost very different amounts, to put a standard protocol on everything. I say we accept this fact, divide our things up into several different types of devices, and stop spending time futilely attempting a "one size fits all" approach to this. We've got serial devices that we're all pretty comfortable with wiring and using at this point. We can use TCP/IP over Ethernet and plug in more devices, and all the hard work is done for us already. We have various flavors of fairly fast remote IO over various other types of wires, and when we want to go to the Windows boxes, we've got OPC.

Looks to me like we've got a full toolbox.

> Imagine, you buy a Machine from a vendor in Japan, the only
> thing he sends you is a "Type Library" of the properties that his Machine exposes on
> Profinet, you use that Type library on your own PLC (from
> another vendor) to create the interaction required and when the Machine from
> Japan arrives the integration is seamless. Can you think of any technology that
> will deliver this today and can be accepted by most vendors?

Pick a protocol and do this over TCP/IP. What do we need Profi<something> for?

By Greg Goodman on 18 May, 2001 - 2:22 pm

> Imagine, you buy a Machine from a vendor in Japan, the only thing he sends
> you is a "Type Library" of the properties that his Machine exposes on
> Profinet, you use that Type library on your own PLC (from another vendor) to
> create the interaction required and when the Machine from Japan arrives the
> integration is seamless. Can you think of any technology that will deliver
> this today and can be accepted by most vendors?

No. I can't imagine that *any* technology will be accepted by "most vendors". For one thing, I believe that current business models still include strong disincentives to blur the distinctions between vendors' offerings. For another, I don't believe that a one-size-fits-all
solution is necessarily the best solution for every problem; there is some strength in diversity, and there will always be a justification for "non-standard" implementations. And, not least, I don't believe you can get that many people, with that many different perspectives, to agree on anything. It's hard enough getting all the vendors in a particular field or industry to agree on a common communications protocol or data model. Look what it's taken to get UCA widely adopted in the power
substation automation field, and even that's still far from universal.

I don't think that we will see a common generic platform-independent integration technology that is generally accepted by most vendors in most segments of the controls and automation industry.

By Ronald H. Nijssen on 21 May, 2001 - 9:33 am

<< I don't think that we will see a common generic platform-independent
integration technology that is generally accepted by most vendors in
most segments of the controls and automation industry. >>

Isn't this what Profibus is about?

Ronald Nijssen

By Ronald H. Nijssen on 19 May, 2001 - 11:22 am

<< Alex Pavloff wrote:
<< That defeats one of the major benefits of OPC -- that is, everything looks the same! I don't have to write any special code for a special device.>>

Isn't it true that an OPC client will have to manage the items a server exposes? I agree that an OPC client can access ANY server, but the backend application still needs an awareness, probably by browsing, of what items are available, and must then access those items. This implies that a drive may expose an item named "Speed" where that item cannot be found on a motor starter.
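A toy sketch of this point in Python (the devices, item names and values are invented for illustration; this is not real OPC code, just the browsing problem it raises):

```python
# Toy illustration (NOT real OPC): a generic client can talk to any
# server, but it must still discover which items exist by browsing.
drive_server = {"Speed": 1450.0, "Torque": 12.3}     # hypothetical drive
starter_server = {"Running": True, "Fault": False}   # hypothetical motor starter

def read_item(server: dict, item: str):
    """Check the browsed item space first; an item that exists on one
    device may simply not exist on another."""
    if item not in server:        # "Speed" exists on the drive...
        return None               # ...but not on the motor starter
    return server[item]

print(read_item(drive_server, "Speed"))    # the drive answers
print(read_item(starter_server, "Speed"))  # the starter has no such item
```

The generic access mechanism is uniform, but the application still has to know (or discover) the item namespace of each device, which is Ronald's point.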

<< We've got that already. Its OPC (love it or hate it)! >>

As far as I know, the OPC protocol, which is COM/DCOM, only works over Ethernet TCP/IP and on Microsoft's operating systems, not on (existing) RS485 IO networks...

<< You mean you want ANOTHER protocol? Can I use TCP/IP? Or do I just have Ethernet? Or do I just have 485? There are just too many options, and too many things out there that cost very different amounts, to put a standard protocol on everything. I say we accept this fact, divide our things up into several different types of devices, and stop spending time futilely attempting a "one size fits all" approach. We've got serial devices that we're all pretty comfortable wiring and using at this point. We can use TCP/IP over Ethernet and plug in more devices, and all the hard work is done for us already. We have various flavors of fairly fast remote IO over various other types of wires, and when we want to go to the Windows boxes, we've got OPC. >>

Wouldn't it be great if the protocol (read: message format, not physical layer) could run on different physical layers (e.g. Ethernet and Profibus)?

<< Pick a protocol and do this over TCP/IP. What do we need Profi<something> for? >>

As far as I know, TCP/IP doesn't run over existing RS485 profibus cables (yet)


Ronald Nijssen

By Jim Stewart on 22 May, 2001 - 4:05 pm

> As far as I know, TCP/IP doesn't run over existing RS485 profibus cables (yet)

I have done it... transported TCP/IP over Profibus, that is. And my buddy did TCP/IP over ControlNet. The TCP (transport) and IP (network) layers have little to do with physical layers and can generally be put on any data link and physical layer you'd like...
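The encapsulation idea is simple: the IP datagram just rides as opaque payload inside whatever frame the data link uses. A minimal sketch in Python (the frame layout here, a sync byte, a length field and a checksum, is invented for illustration and is not the format of any real Profibus firmware):

```python
# Conceptual sketch of tunneling IP datagrams over an arbitrary data link.
# The frame layout (sync byte, 16-bit length, 8-bit checksum) is made up
# for illustration; real firmware would use the link's native framing.
import struct

SYNC = 0xA5  # hypothetical start-of-frame marker

def wrap(ip_datagram: bytes) -> bytes:
    """Encapsulate an IP datagram in a minimal link-layer frame."""
    checksum = sum(ip_datagram) & 0xFF
    header = struct.pack("!BH", SYNC, len(ip_datagram))
    return header + ip_datagram + bytes([checksum])

def unwrap(frame: bytes) -> bytes:
    """Recover the IP datagram from a frame; raise if it is corrupt."""
    sync, length = struct.unpack("!BH", frame[:3])
    if sync != SYNC:
        raise ValueError("bad sync byte")
    payload = frame[3:3 + length]
    if sum(payload) & 0xFF != frame[3 + length]:
        raise ValueError("checksum mismatch")
    return payload

datagram = b"\x45\x00\x00\x14-example-ip-bytes"
assert unwrap(wrap(datagram)) == datagram  # round trip over the "link"
```

In Jim's setup the wrap/unwrap would live in the interface card firmware at each end of the Profibus segment; neither TCP nor IP cares what carried the bytes in between.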

By Alan Brause on 25 May, 2001 - 3:35 pm

Cool trick.
Where do you do your wrap/unwrap?
On another processor?

Curious,

Alan Brause
Wideband Technologies
(520) 881-1737
http://www.widebandtech.com

By Jim Stewart on 26 May, 2001 - 5:41 am

> Cool trick.
> Where do you do your wrap/unwrap?
> On another processor?

I designed the SST Profibus line of interface cards (5136-PFB). So I had the advantage of being able to roll a lot of this functionality into the Profibus interface's firmware. The encapsulation took place on the Profibus cards (one on each end).

Jim Stewart
Celerox Digital Solutions
jstewart@celerox.com

By Curt Wuollet on 19 May, 2001 - 11:40 am

Hi Jake.
> Jake Brodsky wrote:
> So you too wonder about why things are done the way they are in the world
> of controls. So do I.

There is no mystery at all. Money uber alles. Anytime it doesn't make engineering sense, think greed and avarice. Then it makes sense.

> From a practical perspective, I really don't care whether the protocol is
> truly "open" or not. What I need is interoperability and competition so
> that manufacturer A can talk to manufacturer B on the same network. If I
> don't like the way A is behaving I can replace it with a similar box from
> manufacturer C.

Amazingly enough, even though I lead an Open Source project, I agree with you. The reality is that, without truly open protocols and Open Source, it will _never_ come to pass. That's because our money is more important than you and I are. The only reason for companies to work towards interoperability and connectivity is that it is extremely useful to the users. But if you contrast that with the possibility that those users might buy part of their system from somebody else, well, "we can't allow that". Imagine all the effort and resources that have been expended to avoid providing what you need. It's absolutely crazy from an engineering viewpoint to invent a whole new world instead of using something that's done and popular already.
It makes perfect sense only if you want to lock your customers in and keep them using your products and your products only. The rest of the world uses Ethernet and TCP/IP with great success. At this point, it is insane to think that _your_ proprietary protocol will serve people better. But it's nowhere near as profitable to use Ethernet as it is to pervert it and decommoditize it into something that is the same in name only.

> For me open protocols are a means to this end. I'm alarmed at the
> reliance on one company in this regard. Microsoft is not the only game in
> town. I'm not suggesting that everybody demand new releases in Linux.
> Shucks, I'd settle for ANY other operating system --even a proprietary
> one.

I'm all for choice also. The only way this will happen _is_ to demand new releases on Linux. Others would be nice too, but they are too far down the popularity scale (think dollars) to be attractive. Then, vote with your wallet. Partnering with MS is so lucrative, with its deliberate obsolescence, that anything better is irrelevant unless you demand it. I would be willing to pay a premium for a PLC with Linux tools, as the up front cost would be swamped by support cost reductions and hardware savings, not to mention intangibles like reliability, flexibility, and unmatched tools for the kind of things automation people do. I couldn't do what I do with the tools you have to work with.
>
> Again, while "open-ness" is very nice, it's only a means to an end. The
> bottom line is interoperability and competition.
> How can I do that?

Make it an issue when you buy equipment; that is the only time you have leverage. Try refusing every proposal in the first round until you get Linux support. They'll get the message loud and clear. And you could support the only organization truly working in the public interest in the automation field: www.linuxplc.org. Even if you don't want to use it, it represents the only real competition you can get behind. And we need your help. You built the Frankenstein; only you can change it.

Regards

cww

By Armin Steinhoff on 21 May, 2001 - 10:58 am

>Jim Stewart wrote:
>I just finished looking over the "draft" version of the ProfiNet
>specification from the PNO (Profibus Not-so Organization). I must say that
>it is remarkably
>unremarkable.
>
>In order to avoid any conversational quagmires with respect to Industrial
>Ethernet as a whole, let us assume that the automation industry has a real
>need for
>Ethernet technologies visa-vi TCP/IP etc. What I see in ProfiNet is a
>rehashing of 5 year old
>technologies (at best) in a slightly different form.
>
>Although I mostly agree with the overall vision of the integration of
>Engineering Applications (configuration tools, programming tools, runtime
>statistical
>analysis tools) into the Open Automation arena, the PNO clearly looses me
>from that point
>on. You really don't have to sweat the details to find some really glaring
>contradictions
>with the over all "Open Automation" objective. The most obvious one is the
>fact that
>ProfiNet engineering tools will use OLE/COM as the object model and they
>further state:
>"The use of interface and communication standards that were developed in
>the Microsoft world does not necessarily mean that PROFInet is confined to
>Microsoft operating
>systems......"
>
>then ONE paragraph later:
>
>"....PROFInet Engineering Systems should therefore be conceived for the PC
>platforms with WINDOWS NT/2000 operating system."
>
>Clearly an "Open Automation" system that is based on a closed proprietary
>operating system and that operating systems' technologies for distributed
>computing is less
>than ideal.

I fully agree !

>Aren't we giving up a large amount of the benefits of "Open Automation".
>There are scores of purely Open, completely Committee based technologies
>for distributed
>computing based on component object principles. All of those technologies,
>besides being
>better object models by far, have implementations for Microsoft Windows
>platforms as well as
>dozens of other platforms. Sure, there are implementations of COM/DCOM
>that are not tied
>to Microsoft, but those only exist because the OPC foundation used
>COM/DCOM as their object
>model. At best, these are "catch up" copies of the original.
>
>With all the choices for open interface, distributed computing
>technologies would COM/DCOM be chosen?

Good question :)

> Even Microsoft has abandoned these technologies for enterprise
>business to business communications (because they really don't work in WAN
>implementations).
>Even SOAP would have been a better choice. But now that they have chosen
>COM/DCOM model, why
>create a new standard at all? Why not just use OPC?

OPC is based on COM/DCOM ...

>Really what is the point here?

They seem not to know about truly innovative concepts?

> Do they really want to make a truly modern,
>open and innovative automation platform or as quickly as possible pump out
>the same old technologies that have been kicking around for years thinly
>disguised as a NEW cutting edge technology?

A truly open and innovative automation platform needs a truly open and innovative communication platform.

Such a platform could be based on, e.g., the open MPI/RT (Message Passing Interface / Real-Time) standard. It defines an object-oriented model for communication at the application layer ... exactly what we need for open, distributed, heterogeneous control systems.

MPI/RT has its roots in (networked) cluster computing systems and is completely independent of the transport layer. There are also MPI/RT implementations for TCP/IP :) ... RT-CORBA based on MPI/RT is planned (mpi-softtech).

See: http://www.mpirt.org
- one of the implementors is http://www.mpi-softtech.com

Another open source alternative is PVM (Parallel Virtual Machine): http://www.epm.ornl.gov/pvm/pvm_home.html
It could be combined with MPI/RT ... similar to PVMIP.

However ... ProfiNet + 'old fashioned concepts' = ProfiNot ???

Best Regards

Armin Steinhoff

By Roger Irwin on 22 May, 2001 - 5:05 pm

I would agree, but little pieces of hardware with nothing more than a <insert Bus-Of-The-Day> interface do not lend themselves to open source the way operating systems and software running on a PC do.

Little pieces of hardware with <bus of the day> generally come with little pieces of paper that explain a simple way to interface to them, perhaps with a simple ASCII option and a few lines of code by way of example. Oh, it may not be plug and play, but all too often you will have it doing what you need quicker...

The problems tend to start when you connect little pieces of hardware to big pieces of hardware whose manufacturer is a corporate/foundation member of <bus-of-the-day.org>. When things do not work, it cannot possibly be the little hardware box's fault, surely.

By Roger Irwin on 22 May, 2001 - 5:06 pm

>Or are you suggesting that the only
>component object technology/distributed computing system that could do
>what you what is made by Microsoft? I say look around the industry...most
>WAN infrastructure IS NOT COM/DCOM based or anything else Microsoft.

My experience is that I have wasted a lot of time looking at LAN- and WAN-based industrial protocols, only to find that customers just do not want them. I think they are less naive than us. When you talk about 'standards' and 'interoperability' they just look at you blandly as if you were born yesterday and say "of course things never are compatible in practice, are they? Why don't you just send a few bytes of data in a UDP packet and stick the format in a .txt file so everyone knows what's in it and can use it". Quite.

Once you have one simple parser written for datagrams/ASCII hex on serial/TCP streams etc, it does not take much effort to modify it for another format. And it works. And anyone can use it. No drivers to order, no organisations to join... Anarchy for industrial networking, that's what I say. Let's picket the next OPC/fieldbus/thisnet/thatnet AGM... oops, I am getting a bit carried away.
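The "few bytes in a UDP packet, format in a .txt file" approach really is that small. A Python sketch (the station/temperature/flags layout is invented purely for the example; the comment block is the entire "spec"):

```python
# A "few bytes in a UDP packet" protocol. The layout below is the kind
# of thing one documents in a plain .txt file, e.g.:
#   bytes 0-1: station id        (uint16, big-endian)
#   bytes 2-5: temp, milli-degC  (int32,  big-endian)
#   byte  6  : status flags      (uint8)
import socket
import struct

FMT = "!HiB"  # the whole wire format, in one struct string

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))  # ephemeral port on loopback for the demo
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# "Device" sends a reading; "host" receives and decodes it.
tx.sendto(struct.pack(FMT, 7, 21500, 0b001), rx.getsockname())
rx.settimeout(2.0)
data, _addr = rx.recvfrom(64)
station, milli_degc, flags = struct.unpack(FMT, data)

tx.close()
rx.close()
```

No stack beyond UDP, no driver, no membership fees; the receiver needs only the format string and the text file describing it.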

Seriously though, this is really what XML is all about, albeit in a slightly more formal container. Anybody who talks about sticking OPC packets in XML, automation data objects et al, should be taken out and shot at dawn.
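The same idea in the slightly more formal container: a self-describing XML payload, parsed with the standard library (element and attribute names are invented for the example):

```python
# A self-describing text payload: the "format documentation" travels
# with the data itself. Names here are made up for illustration.
import xml.etree.ElementTree as ET

payload = "<reading station='7'><temp unit='degC'>21.5</temp></reading>"

root = ET.fromstring(payload)
station = root.get("station")          # "7"
temp_c = float(root.find("temp").text)  # 21.5
unit = root.find("temp").get("unit")    # "degC"
```

Any platform with an XML parser can consume it; no binary object model required.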

By Jim Stewart on 23 May, 2001 - 9:59 pm

>
> My experience is that I have wasted a lot of time looking at LAN- and WAN-based industrial protocols, only to find that customers just do not want them. I think they are less naive than us. When you talk about 'standards' and 'interoperability' they just look at you blandly as if you were born yesterday and say "of course things never are compatible in practice, are they? Why don't you just send a few bytes of data in a UDP packet and stick the format in a .txt file so everyone knows what's in it and can use it". Quite.
>
> Once you have one simple parser written for datagrams/ASCII hex on serial/TCP streams etc, it does not take much effort to modify it for another format. And it works. And anyone can use it. No drivers to order, no organisations to join... Anarchy for industrial networking, that's what I say. Let's picket the next OPC/fieldbus/thisnet/thatnet AGM... oops, I am getting a bit carried away.
>
> Seriously though, this is really what XML is all about, albeit in a slightly more formal container. Anybody who talks about sticking OPC packets in XML, automation data objects et al, should be taken out and shot at dawn.

I entirely agree! I prefaced my first posting with the disclaimer "Let's assume the Industrial Automation industry needs WAN....."

I really don't think the IA industry needs a lot of WAN infrastructure. But, interestingly enough, you mention text based application layers like XML used to create SIMPLE layer 6 and 7 protocols for automation; check this out:

http://www.factoryxml.com

By Joe Jansen/ENGR/HQ/KEMET/US on 30 May, 2001 - 9:43 am

>I really don't think the IA industry needs a lot of WAN infrastructure.
>But, interestingly enough, you mention text based application layers like XML used to create SIMPLE layer 6 and 7 protocols for automation; >check this out:
>
>http://www.factoryxml.com


From the web site:

International patents for FactoryXML are pending.

What, exactly, are they patenting? The use of XML in a factory? Does that mean I have to stop?

--Joe Jansen

By Curt Wuollet on 23 May, 2001 - 11:32 am

Hi Alex

Alex Pavloff wrote:

> > The mistake that is holding back fieldbus nodes is to start with
> > proprietary single sourced silicon and proprietary protocols
> > where the IP licensing costs more than the device itself. When you look at
> > the cost of all the leeches, it is impractical to embed a network stack in
> > devices. Yet no
> > one so far has realized that the economics of Open Source and
> > commodity Ethernet silicon are about the only way to open the
> > floodgates and move things forward.
> > It is hilariously ironic that the thing that is holding back massive
> > fieldbus deployment is their own anal attitude that they must own or
> > control everything.
> > The market is so badly fragmented by these control freaks that new
> > additions are doomed before they start simply because they
> > won't use what
> > is established, cheap, and ubiquitous. Greed is its own
> > reward when you reinvent the world.
>
> They do have one point though. TCP/IP over Ethernet isn't always fast enough. Usually...
>
> > If I can buy $9.00 Ethernet cards, no proprietary scheme is ever going
> > to achieve the volumes to be competitive.
>
> Oh, I agree, but I wouldn't even use Linux if I'm trying to get cheap IO.
> You could make a system with an ARM processor running ucLinux (or something)
> off of some flash. People have already got that though. I was at the
> embedded systems conference in SF last month, and while there was a hell of
> a lot of Linux folks, there were just as many folks adding TCP/IP stacks to
> their already cheap hardware.
>
> For an example: Check out www.zworld.com and look at their RabbitCore
> series. You could make a TCP/IP IO module with 34 IO lines, and the most
> expensive unit is $89 for a quantity of 1! That's less than $3 a point for
> hardware costs (not counting the cost of packaging the devices or any
> margin). Sure, I have to use their special C as opposed to writing Linux,
> but if you want to use cheap hardware to make a cheap TCP/IP IO device, even
> Linux is too fat.

There are a couple of ways to look at it. First, I personally am not interested in the proprietary offerings, as I intend to release the hardware design and software to the LPLC project and the public for free under the GPL or equivalent. I agree the Rabbit looks like someone in the embedded industry finally got the volume religion, and I wish them well. This in any volume would probably be cheaper, as the hardware is cheaper. It's not that much cheaper, though, and with software development under Linux being almost trivial, I would be willing to pay the difference to have Open Source. Sure, it's overkill, but it would still be cheaper than buying into any of the "open" industrial clubs and using their silicon and protocols. And it would be a standalone Linux system. I could run the LPLC code on it and have a "micro" PLC replacement, or even distribute tasks to it or let it concentrate data. My justification is that you would get a whole lot more functionality for a few more bucks. Sort of three in one, to justify the extra hardware. And you would get the source with every unit.

Regards

cww

By Greg Goodman on 23 May, 2001 - 11:37 am

> based on long experience implementing public communications
> standards, anytime there are choices each developer will likely make
> different choices. Not to purposely foil interoperability, but due to
> simple human frailty.

i agree that, when choices are available, different developers make different choices, but i don't ascribe it to human frailty. different
developers have different understandings of the problem set, they operate under different constraints, and answer to powers that judge
them and their efforts using different criteria.

if everyone were supposed to make the same choice, then there would be no need to provide a choice in the first place.

the real trick, as a developer, is to know what choices you're making or providing, what the cost/benefit tradeoffs are, and to make all of those choices and their implications known to the people who need to know.


Greg Goodman
Chiron Consulting

By Curt Wuollet on 23 May, 2001 - 3:35 pm

Hi Roger

I've got an even better idea than that. Contribute the time you'd spend writing a couple of one-shot, definite-purpose protos to help us with a truly open, free, publicly owned, Ethernet-based proto for the LPLC project. Then maybe next time around you could start with a solid framework and make a few trivial adjustments for the application. If enough people do that, it'll cover a lot of ground _and_ be well standardized, yet easily extensible at need. We have someone, Ron Gage, who has started just that. Sharing a proto we all own, one that runs on commodity _or_ specialized hardware, is an idea whose time has come. If people would quit bitching about the status quo and help us change it, the benefits expand at an exponential rate. All we need is committed volunteers. Think about it: how many people does even an AB or Siemens actually have writing networking code? We can do a lot better than that if people really want things to change. Waiting for change is pretty hopeless. This is such a limited market that a few committed people can actually put the industry back on track and break the Tower of Babel deadlock.

Regards

cww

By Armin Steinhoff on 29 May, 2001 - 1:44 pm

>Curt Wuollet wrote:
>Hi Roger
>
>I've got an even better idea than that. Contribute the time you'd spend
>writing a couple of one-shot, definite-purpose protos to help us with a
>truly open, free, publicly owned, Ethernet-based proto for the LPLC
>project.

If you have only soft real-time requirements ... use PVM. It's based on message passing over TCP/IP and is widely used for networked computer clusters. This piece of software is really mature and is used by many research institutes and industrial sites.

Computer clusters and distributed control systems are, from the communication point of view, very similar (or nearly identical).

The PVM library offers blocking and non-blocking send and receive calls, event handling, network-wide synchronisation, message handlers (remote execution), and lots of other useful mechanisms ... and it is an 'open source' tool.
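The blocking versus non-blocking receive distinction Armin mentions can be illustrated by analogy with a plain Python queue; this is a sketch of the semantics only, not a PVM binding (in PVM itself the corresponding C calls are `pvm_recv` and `pvm_nrecv`):

```python
# Analogy only: blocking vs non-blocking message receive, the two
# semantics PVM exposes (pvm_recv blocks; pvm_nrecv returns at once).
import queue
import threading

mailbox: "queue.Queue[int]" = queue.Queue()

def remote_task():
    mailbox.put(42)  # the "send" side of the exchange

threading.Thread(target=remote_task).start()

# Blocking receive: waits until a message arrives (or the timeout hits).
msg = mailbox.get(timeout=2.0)

# Non-blocking receive: returns immediately, signalling "nothing yet"
# instead of waiting, which is what a control loop usually wants.
try:
    mailbox.get_nowait()
    got_second = True
except queue.Empty:
    got_second = False
```

In a control system the non-blocking form lets a scan loop poll for messages without ever stalling the loop.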

There are implementations for the various UNIXes, Linux, QNX 6, Win98/NT and so on
( http://www.epm.ornl.gov/pvm/pvm_home.html
http://sourceforge.net/projects/pyqnx )

PVM isn't bound to a transport medium ... its transport layer can also be implemented on top of, e.g., CAN, LON or PROFIBUS ... which means applications can be moved from a TCP/IP environment to a fieldbus environment without code changes.

> Then maybe next time around you could start with a solid
>framework and make a few trivial adjustments for the application.
>If enough people do that, it'll cover a lot of ground _and_ be well
>standardized yet easily extensible at need. We have someone,
>Ron Gage, that has started just that. Sharing a proto we all own
>that runs on commodity _or_ specialized hardware is an idea whose
>time has come. If people would quit bitching about the status quo and
>help us change it, the benefits expand at an exponential rate. All we
>need is comitted volunteers. Think about it, how many people does
>even an AB or Siemens actually have writing networking code?

They probably don't know how much mature networking code based on MPI or PVM is already in use for distributed systems.

>We can do a lot better than that if people really want things to change
>Waiting for change is pretty hopeless. This is such a limited market
>that a few committed people can actually put the industry back on
>track and break the Tower of Babel deadlock.

Standards at the application layer like MPI, PVM, MPI/RT could be the tools to break the Tower of Babel ... at least a little bit :)

Regards

Armin Steinhoff