End-to-End Data Visibility: 5 Practical Examples

Insight from Ulf Persson, Director, Product and Solutions Marketing, Axway

There’s More to Life Than Saving Money

by Antoine Rizk
VP, B2B Program, Product and Solutions Marketing
Axway

New requirements constantly emerge for additional standards that are not yet supported (e.g., new networks, protocols, formats, etc.). Infrastructures that are too rigid simply break like the tall oak in the wind and need to be replaced.

Back in December 2008, a leading analyst predicted that, due to the then-burgeoning, now-stabilizing financial crisis, the major business case for many application and infrastructure initiatives was going to align with cost reduction.

And they were right. Nearly every customer I’ve spoken to corroborates the analyst’s prediction. But I submit that there’s more to life than saving money, even in lean times like these. So what, you may wonder, are those things, those compelling drivers for B2B/MFT projects besides getting things done on the cheap?

Five stand out to me.

  • Compliance: You must have the right auditing and logging information to comply with industrial, financial or legal obligations. In a way, you might see this as a continuation of the saving-money school of thought. After all, preventing fines and lawsuits is a money-saving activity, right? But there is a vast difference between spending too much on something and spending any amount on nothing, which is never an easy circumstance to accept.
  • Business Growth: Business growth often drives architecture refreshes, and rightly so: when the business outgrows its architecture to the point that no amount of ingenuity or reorganization can compensate, the existing solution can no longer cope with the demands of the trading partner community.
  • Business Risk and Loss of Data: When your current processes lack the control you need for guaranteed, once-only delivery, the result is a story of misfortune: missed SLAs, multiple payments made, orders lost, etc. And all of this directly impacts the bottom line.
  • Personnel Rollover: Developers of legacy, home-grown FTP solutions, like long-gone inhabitants of ancient civilizations, move on to new opportunities, and in their wake they leave behind little or no documentation on the tools and mission-critical data that their successors need in order to keep operations running smoothly.
  • Expanding Requirements: New requirements constantly emerge for additional standards that are not yet supported (e.g., new networks, protocols, formats, etc.). Infrastructures that are too rigid simply break like the tall oak in the wind and need to be replaced.

In my view, none of the above drivers are compelling enough in their own right to bring about a new infrastructure initiative without thorough proof of accompanying TCO reduction and ROI. This truth cannot be stressed enough.

Fortunately, reducing TCO can be done in many different ways. Consolidation, however, is by far the approach that brings the most value. B2B/MFT consolidation can be achieved by replacing multiple legacy and home-grown solutions with a single solution that is solid enough to maintain and even enhance performance, and covers all the required formats and protocols for the exchanges.

Consolidation can cover a variety of business cases:

  • Replacing on-premise FTP/home-grown file transfer infrastructure with a managed file transfer solution
  • Replacing an on-premise product with a managed file transfer solution
  • Replacing several on-premise multi-enterprise/B2B gateways with a single B2Bi platform
  • Replacing a VAN with an on-premise B2Bi solution
  • Replacing on-premise B2B gateway(s) with a B2Bi on-demand solution

As well as any combination of the above.

Finally, there is the ever-important ROI. Three ways consolidation brings you ROI include:

  • Reducing your IT costs: This includes personnel, planning, organization, acquisition, implementation, delivery and support, as well as monitoring/evaluation costs.
  • Reducing your business costs: This includes costs due to an error or a delay in message/file delivery, downtime hours, fees and penalties per missed SLA, audits, losing customers and data breach penalties.
  • Increasing your business value: This includes incremental revenues due to getting new products to market faster, being easy to do business with, reliable delivery and non-repudiation, increased customer satisfaction and loyalty, and being able to add new partners to your network.

When you increase business value, you increase your revenues and quickly bring better services to market. Your suppliers and customers can deal with you easily, and that brings about customer satisfaction, loyalty and a host of other benefits too numerous to explore here. It’s easy and vitally important to take measures to save money, no doubt about it. But it’s the truly savvy business people who remember the essential nature of nourishing and flourishing business value, and it’s those business people who will, years from now, look for new ways to save money while their long-gone competitors fondly remember the business they were once in.

(Photo by zenera: http://www.flickr.com/photos/zenera/ / CC BY-SA 2.0)

Data Security in the Cloud

As companies look for more and more cloud-like services, where they gain a powerful ability to scale despite not owning the hardware, where they can simply pay for as much as they use and not a cent more, and where the availability of this terrific technology is ubiquitous, the number of connections that they have to manage becomes mind-bogglingly high, and yet, they’re entrusting their data to somebody else.

by Paul French
VP, Product & Solutions Marketing
Axway

Cloud computing is coming. And it’s coming fast and furious. Companies are going to start adopting cloud computing for things that appear to have mission-critical importance but maybe aren’t genuinely mission-critical, e.g., email. And companies like Salesforce.com are going to be there to provide those cloud computing options for countless companies.

But the challenge is this: As companies look for more and more cloud-like services, where they gain a powerful ability to scale despite not owning the hardware, where they can simply pay for as much as they use and not a cent more, and where the availability of this terrific technology is ubiquitous, the number of connections that they have to manage becomes mind-bogglingly high, and yet, they’re entrusting their data to somebody else. They’re taking on more responsibility than ever before, yet resting so much of their ability to fulfill their commitment to that responsibility on another party.

Madness? Brilliance? Maybe both?

When you have Amazon or Google as your cloud provider, or any of the increasing number of cloud providers who offer either full-on clouds or private clouds, how do you know that your data is secure and safe relative to everybody else’s data that it’s sitting alongside?

More importantly, how do you make sure that there is some level of governance applied to where the data resides and how your people get access to that data?

A famous analyst recently told me that “When somebody decides they’re going to leave their company, the first thing they do is download everything off Salesforce.com.” That’s true. You need to provide the same level of controls over any one employee’s actions—over any 10,000 employees’ actions—in the cloud as you would with your own on-premise solution. You need to have a cloud-compliant solution that sits on top of whatever sort of cloud you’re trying to leverage.

Anything short of having that is more madness than brilliance. But with that cloud-compliant solution, with an agent that puts the robust data security of pre-cloud-era governance back in your hands without having to compromise all the terrific advantages that cloud computing has to offer, brilliance eclipses madness. It is then that the promise of cloud computing can be realized without even the slightest asterisk to make you second-guess your decision to buy into it.

(Photo by pagedooley: http://www.flickr.com/photos/pagedooley/ / CC BY 2.0)

Zeus Continues to Wreak Havoc

(Note: The following is a repost of a blog entry that appeared on http://twitblogs.com/axway earlier this summer.)

by Paul French
VP, Product & Solutions Marketing
Axway

The new Zeus trojan is terrifying, to put it mildly. It has completely changed the way IT security professionals think about FTP. It used to be that if you were doing business with Amazon or Bank of America or Cisco, you could rest assured that you were completely safe: you could trust that they had contemplated things like SSL connections and secure proxies. But now it’s clear that that’s not the case.

So far, 90,000 FTP services from huge companies like Amazon and Bank of America and Cisco have been breached. And because of the way FTP works, and because of the way these companies store credentials, it is possible that that number could grow exponentially. Who knows how many FTP servers were connecting to Amazon and Bank of America and Cisco? That’s a really scary thought.

The fact that leading analysts have continued to bang the drum that FTP is not the answer should speak volumes. The only way you can bring a serious level of security and reduction in your risk profile is to deploy a proper managed file transfer solution: one that doesn’t bring the negative associations of FTP, and one that doesn’t store credentials that can be stolen or compromised by a trojan or a malicious or reckless employee.

What do you think? Are you comfortable doing business with any company that relies on FTP when transferring sensitive files? If so, why?

(Photo by terren in Virginia: http://www.flickr.com/photos/8136496@N05/ / CC BY 2.0)

Compliance is to Security as Laws are to Morality

(Note: The following is a repost of a blog entry that appeared on http://twitblogs.com/axway earlier this summer.)

By Taher Elgamal
Chief Security Officer
Axway

July’s big security breach saw hundreds of thousands of account numbers compromised despite the fact that the host was compliant. Same thing with the Heartland breach that happened months ago. They were also compliant. And you can actually find a few dozen of these, smaller ones perhaps, where people spent millions of dollars on PCI compliance and still had data breaches. Philosophically, here is where I stand: We invented compliance as a tool for businesses to be able to tell how well we are doing with our security. That’s the purpose of compliance. But somewhere down the line, compliance became the goal, not the tool. Our sole goal now is to merely be compliant with something! And, as it turns out, when you do that, you actually forget what you wanted to do in the first place—prevent leakage of account numbers, not just be compliant.

This applies in a lot of different areas, not just PCI. But I think PCI is a very good example of these issues. So people go through the PCI checklist, and there are twelve areas, and each area has several things, and they walk down one at a time and say, “Yes, I did this” and “Yes, I did that,” and they get a certificate. And, of course, two months later, half of the machines change configurations. New people came in, old people left. And you end up with a network that looks very different from the one that got certified.

But you can’t certify someone every day. The cost is already very high. There’s no way you can do anything more than the annual thing. And it turns out it’s becoming a pure cost, because people get certified and they still suffer through the breaches. But when you get one of these big breaches, you pay a lot of fines and fees, and it’s a very expensive proposition.

We need to start a conversation that says, “What we need to do is achieve a better level of security in our important networks.” And that implies that we understand what it is that we need to do, and that day-to-day management of important systems, machines and applications has to be implemented correctly. We’re not going to PCI certify every single thing all the time, but we need to carry the ideas from these compliance regulations into our daily activities, because that’s how we manage correctly.

Honestly, that’s the only way you can achieve any level of security to survive.

Unfortunately, traditional security thinking here demands that we look at PCI and other standards as cures, silver bullets to fix things. And the entire industry is now thinking that that’s the wrong thing to do, because there’s not ever going to be a single silver bullet. It’s really about day-to-day management of things. We need to steer people away from thinking that “Maybe PCI is the wrong thing. Let’s look for the right thing.”

There is no such thing. The right thing is to go back to basics. Have the right security policies in place. Make sure you have a team and a head of security that understand the issues. Do day-to-day management. Self regulate. Have the team validate what they’re doing. Forget about the silver bullet. There will never be a technological solution that fixes the security issue. Ever.

And the security issue changes constantly, because the ways hackers breach these systems change with time. It’s not just about closing the old hole, only for people to find new holes to get through. It’s about how you build an ongoing scheme.

What do you think? Are the big breaches of 2009 anything less than quintessential examples of organizations trying to do the right thing but forgetting that the tool and the goal are actually completely different issues? Can’t it be said that compliance is to security as laws are to morality?

What Problems Will IT Organizations Face Over the Next Few Years?

Insight from Axway CTO Dave Bennett.

BPM Visibility Paves the Road to Operational Excellence

by Sylvain Astier
Product Manager, BPM
Axway

While going through my stuff, I found a copy of “The Scorecard Methodology,” an old HBR article (circa 1992) by Kaplan and Norton in which they emphasize performance measures in key areas of every organization.

A BPMS is simultaneously the rearview mirror allowing you to understand what happened, the windshield through which you view what is about to happen, and the steering wheel empowering you to modify and adapt your course of action.

It got me thinking about visibility. A strategy-focused organization can use the balanced scorecard approach to track how well it performs in regard to its different goals and objectives. The general idea is that organizations get what they measure. For instance, focusing on short-term financial performance most likely would lead to a disaster—what you might call the Lehman Brothers effect.

This strategy determines what an organization must achieve and how it must achieve it. If you don’t know where your organization stands, you won’t know how to take it anywhere. This has been understood since ancient times, when warfare theories focused on a thorough understanding of topography and logistics, yet modern organizations have only begun to grasp the importance of mapping their own courses of action. With a proper map of its processes, an organization can better align its operational reality toward its strategic goals. In this regard, Business Process Management Systems (BPMSs) are extremely powerful, as they allow process automation and offer visibility into how an organization performs in its overall value creation network.

In fact, BPMSs can also provide visibility without automating anything, simply by consolidating flows of events. For instance, probes can be used to fetch information from legacy applications and generate events, which are consolidated by a BPMS providing visibility on parts of process instances about which one has very little information. Another important usage of non-automated processes is the control of events coming from business partners, ensuring that every collaboration’s participant provides the appropriate information at the right time (and in the right format) as defined per the service level agreement.
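To make the partner-event control concrete, here is a minimal sketch. The partner names, event types and deadlines are all invented for illustration; a real BPMS expresses SLAs declaratively and evaluates incoming events continuously rather than through a hand-written lookup like this:

```python
from datetime import datetime, timedelta

# Hypothetical SLA table: each (partner, event type) pair must arrive
# within the agreed window after the triggering business event.
SLA_WINDOWS = {
    ("acme", "ship-notice"): timedelta(hours=4),
    ("globex", "invoice"): timedelta(hours=24),
}

def within_sla(partner, event_type, triggered_at, received_at):
    """Return True if the partner's event arrived inside its agreed window."""
    window = SLA_WINDOWS.get((partner, event_type))
    if window is None:
        return False  # no agreement covers this event; flag it for review
    return received_at - triggered_at <= window
```

An event arriving after its window would surface as an SLA violation in the monitoring view, with no process automation involved at all.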

BPMSs make many aspects visible, most notably these two: the current state of process instances and the different variables associated with each step, such as its cost or completion time. Hence, BPMSs can help predict the future state of an organization based on its current situation. For instance, BPMSs can help identify a potential bottleneck before it arises, and can easily correct it through dynamic resource reallocation. BPMSs can also provide real-time visibility on specific customer cases and answer important questions (e.g., “Where is my order?”), ease human work and interactions, and identify who is responsible for what and who did what. A BPMS is simultaneously the rearview mirror allowing you to understand what happened, the windshield through which you view what is about to happen, and the steering wheel empowering you to modify and adapt your course of action.
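The bottleneck idea can be sketched in a few lines, assuming we have timing events per step. The step names, durations and threshold below are made up for the example; a real BPMS computes this continuously against live instance data:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical timing events: which step an instance completed, and how long it took.
events = [
    {"step": "credit-check", "seconds": 40},
    {"step": "credit-check", "seconds": 55},
    {"step": "pick-and-pack", "seconds": 300},
    {"step": "pick-and-pack", "seconds": 340},
    {"step": "invoice", "seconds": 20},
]

def bottleneck_candidates(events, threshold_seconds=120):
    """Average completion time per step; return steps over the threshold, slowest first."""
    durations = defaultdict(list)
    for e in events:
        durations[e["step"]].append(e["seconds"])
    averages = {step: mean(times) for step, times in durations.items()}
    return sorted(
        (s for s, avg in averages.items() if avg > threshold_seconds),
        key=lambda s: averages[s],
        reverse=True,
    )
```

A step that consistently exceeds its expected duration shows up here before queues back up behind it, which is exactly the early-warning visibility described above.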

This brings me back to Kaplan and Norton’s point: you get what you measure. BPMSs are fantastic tools, but if you focus on the wrong objectives, BPMSs will help you reach those wrong objectives in an efficient manner! However, at the same time, as BPMSs provide real-time visibility, you might easily re-assess these wrong objectives, adapt your organization’s behavior accordingly, and realize a broad spectrum of opportunities that had been unthinkable until now.

(Photo by stevelyon: http://www.flickr.com/photos/chicanerii/ / CC BY-SA 2.0)

Raise Consciousness—Not Frustrations—With DLP

by Willy Leichter
Director, Product & Solutions Marketing
Axway

I want to be the voice of the realist here, because there’s a lot of hype around DLP, yet DLP implementation projects have often failed because it can be such a complex, daunting undertaking.

Nine times out of ten, once you start content filtering, the incident rate drops dramatically, because employees don’t want an alert and they don’t want to be flagged by IT.

Many organizations put limits on the size of the files their employees can transmit, but they don’t tell the employee what to do if they need to send something larger. This leads to a lot of well-intentioned employees coming up with ways to send large files and get their jobs done.

As numerous bleeding-edge companies pilot DLP projects, they find it can be an enormous challenge. When you consider all the possible ways information can leak, most organizations resemble Swiss cheese. How do you plug up all those holes?

That question leads to a couple of dilemmas. One, if you lock things down well, then you shut down business, an effect worse than the problem you’re trying to solve. Two, the business-unit owner is supposed to own the information, but since IT is usually running the DLP products, they’re put in the position of being the bad guys who force the business-unit owner to confront issues that they do not want to confront.

To keep this from being too overwhelming, I recommend a few basic starting points that will maximize your DLP effectiveness while minimizing your frustrations.

First, instead of trying to boil the ocean, look for things that have become established best practices, and for the information that is most critical for your organization to protect. This doesn’t include all possible things that might need protecting, just really top-of-the-list information. And often, that’s regulated information. Healthcare data is obvious. But PCI compliance, credit card numbers, social security numbers—those are the things that, more and more, you need to be concerned about. In a vast orchard of data, this is low-hanging fruit!

Second, protect the obvious egress point: email. There are pretty straightforward steps that almost all organizations should be taking to prevent well-intentioned employees from sending sensitive information—often by accident—into the clear, specifically credit card, social security, and healthcare numbers. This is relatively easy and it addresses the most common way people accidentally send information.
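As a rough illustration of that kind of outbound filter (the patterns below are deliberately simplistic; production DLP engines add checksum validation, context rules and far better pattern coverage), an email content scan might look like this:

```python
import re

# Hypothetical patterns for common regulated data. Real DLP products use
# much stronger detection (e.g., Luhn checks and proximity rules for cards).
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_message(body):
    """Return the names of any sensitive-data patterns found in an outbound message."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(body)]

def filter_outbound(body):
    """Return (allowed, hits): block the message if any pattern triggered."""
    hits = scan_message(body)
    return (len(hits) == 0, hits)
```

A blocked message would then generate the alert discussed below, telling the sender why the mail was stopped rather than failing silently.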

Third, think of DLP in the broader context. Trying to thwart someone intent on stealing is much more challenging than preventing mistakes. Focus first on preventing well-intentioned employees from making stupid mistakes, like accidentally copying files that can cause liability, or setting up rogue FTP servers.

Many organizations put limits on the size of the files their employees can transmit, but they don’t tell the employee what to do if they need to send something larger. This leads to a lot of well-intentioned employees coming up with ways to send large files and get their jobs done. A lot of serious incidents have happened this way—employees copying CDs and DVDs, employees sending things over Yahoo accounts to their own computers, employees sending discs via snail mail. IT must not ignore the “What do we do if we don’t allow them to use email to send big files?” question. IT must provide mechanisms that allow employees to reliably send large files rather than just block those files and say no.

It’s important that you raise awareness within the organization that you’re taking steps regarding these issues. If you’re filtering stuff, you should raise alerts and make people aware why you’re taking these steps. Nine times out of ten, once you start content filtering, the incident rate drops dramatically, because employees don’t want an alert and they don’t want to be flagged by IT. They want to comply. These practices tend to train the group better than actual training. And what better proof of an initiative’s success than consciousness being so fundamentally raised that, after a time, the ongoing need for the initiative becomes largely moot?

Three Reasons Infrastructure Consolidation Yields Unparalleled IT Savings

by Bernard Manouvrier
Chief Architect
Axway

Infrastructure consolidation creates a marked reduction in complexity, which minimizes the number of errors committed by both humans and machines alike, and accommodates staff without demanding a steep learning curve, which translates into minimized training costs.

I’m often asked by IT directors and IT professionals in general, “What can I do to save money in my department?”

It’s a simple question, but a particularly important one for IT, because while other departments have opportunities to both earn and save money, IT is generally relegated to the unenviable position of appearing to be an expensive tool for the money earners: a function that only costs the company money and, at best, earns money for it indirectly.

Unfair? Absolutely. But true. Yet there are initiatives IT can carry out to compensate for this inequity. And of these initiatives, few, if any, conserve precious resources and minimize the demands put on IT’s time like a consolidated infrastructure.

Here are three ways it does this.

First, infrastructure consolidation reduces direct costs in several important areas, which leads to a veritable chain reaction in savings, a dramatic staunching of the outflow of cash. The IT department requires less hardware when the infrastructure is consolidated, and therefore requires fewer software licenses. This yields a leaner network, which means fewer personnel are required to keep the network in order, and fewer managers are required to keep the personnel in order! The windfall of savings couldn’t be easier to imagine.

Next, infrastructure consolidation reduces risks and the resulting costs incurred when risks escalate into catastrophes. This is because infrastructure consolidation creates a marked reduction in complexity, which minimizes the number of errors committed by both humans and machines alike, and accommodates staff without demanding a steep learning curve, which translates into minimized training costs.

Finally, nothing accommodates change management like a consolidated infrastructure. Do you want to re-engineer your processes and applications? A consolidated infrastructure facilitates the process by reducing the constraints attached to the development technologies, as well as reducing the constraints attached to the deployment of the processes and applications. Do you want to upgrade your hardware and software? A consolidated infrastructure eases the process by reducing the number of platforms and software packages that must be upgraded and the potential downtime associated with this process. Do you need to scale due to a merger or acquisition? A consolidated infrastructure expedites the process by reducing the complexity of linking the IT environments of the merged companies, as well as reducing the technical constraints for integrating their applications.

What do you think? If an IT person were to ask you, “What can I do to save money in my department?”, what answer would you give them?

(Photo by pmarkham: http://www.flickr.com/photos/pmarkham/ / CC BY-SA 2.0)

Some Industries Can Benefit from the Anti-Counterfeit Measures of Other Industries

by Kim Loughead
Director, Product & Solutions Marketing
Axway

The good news is that many of these industries are tackling the problem head-on, and what they have learned can be applied to the control valve market as well.

One of the key learnings across these industries is that sharing information with partners is critical to gaining control over your supply chain.

Valve Magazine, “the only magazine for users, specifiers, buyers and distributors of North American valves and related products,” published an article last month addressing the problems associated with counterfeit or suspect products in the control valve industry.

Suffice it to say, the control valve industry is in good company. Many manufacturing industries suffer from the same problems: unscrupulous suppliers peddling substandard products, uncontrolled secondary markets, and outright counterfeits. The good news is that many of these industries are tackling the problem head-on, and what they have learned can be applied to the control valve market as well. For example, the technology industry has created a group called the Alliance for Gray Market and Counterfeit Abatement (AGMA), which has developed procurement guidelines and channel management recommendations to help identify, prosecute and reduce counterfeiting and diversion. Other such organizations exist in the pharmaceutical, automotive and aerospace industries.

As mentioned in the article, one of the key learnings across these industries is that sharing information with partners is critical to gaining control over your supply chain. This may be as simple as sharing sales data and repair records. Some industries where public safety is at risk from counterfeits, notably pharmaceuticals and aerospace, are going so far as to uniquely identify production units and track the chain of custody of a unit throughout its life. Standards such as GS1 EPC Information Services (EPCIS) and SPEC2000 are being developed to help industry partners share data effectively. The GS1 standard is cross-industry, and may also be applicable to the control valve industry.
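A toy version of the chain-of-custody idea, just to make the shared data concrete: the serial numbers, holders and dates below are invented, and real implementations follow the EPCIS event model rather than this simplified ledger.

```python
# Hypothetical chain-of-custody ledger for serialized units:
# serial number -> ordered list of (holder, date) transfer records.
custody = {}

def record_transfer(serial, holder, date):
    """Append one custody transfer for a serialized unit."""
    custody.setdefault(serial, []).append((holder, date))

def custody_chain(serial):
    """Return the full chain of custody for one unit, in transfer order."""
    return list(custody.get(serial, []))

# Example: one valve moving from manufacturer to distributor.
record_transfer("VLV-0001", "manufacturer", "2009-09-01")
record_transfer("VLV-0001", "distributor", "2009-09-10")
```

A unit that surfaces with a gap or an unknown holder in its chain is exactly the kind of abnormality the scorecard and tracking strategy above is meant to catch.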

The actions recommended in the article are in line with what is recommended for other industries. I would add to this, however, a channel management strategy and technology to ensure you have the capability to track and scorecard your suppliers against the terms of their agreements. This will give you the evidence you will need to take action if you find abnormalities. I would also emphasize the need to share more information across your partner network to gain greater visibility and control over your supply network.

(Photo by Michael Cornelius: http://www.flickr.com/photos/cornelii/ / CC BY-SA 2.0)