Frisky Language Aside: An Important Cloud Message

June 7, 2019

I don’t know much about Digital Ocean, droplets, Checkly, Raisup, or the other Fancy Dan technologies mentioned in the write up. I usually ignore articles with unpleasant language. I worked through this write up because, at the policeware conference in Myrtle Beach, there was quite a bit of chatter about the move to the cloud by law enforcement entities of all shapes and sizes.

The title of the article is “Why the Recent ‘Digital Ocean Killed My Company’ Incident Scares the [curse word] Out of Me.” The link to the story is here. The main idea is that a cloud provider relied on an automated abuse-monitoring system, and the customer found that his account had been killed. After some flim flam, the account was restored. There are other details, but the value of the write up resides in these points, often buried in the description of who shot John or Jane in the back on a stormy night near the digital corral.

ITEM 1: “I’m scared I could be hit by an out-of-control abuse algorithm and a broken customer service process. And I have zero Twitter clout or any other online notoriety.” DarkCyber’s comment: Yep, get used to the reality of engineers who either don’t know, don’t care, or are trying to find a better job.

ITEM 2: “you can’t just shrug off basic service reliability and availability planning just because you’re a (small) startup. Consequently, that is the whole reason you are using a cloud service.” DarkCyber’s comment: Sorry, no free lunch.

ITEM 3: “You only have to be wrong once.” DarkCyber’s comment: Some folks are used to getting gold stars for trying hard. Nope, gold stars go to those who win the race, those who are top students in math, and those who don’t make mistakes. Life is cruel for those who make errors.

ITEM 4: Do have some backups. DarkCyber’s comment: That’s good advice.
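In the spirit of ITEM 4, here is a minimal backup sketch in Python. The paths are hypothetical; the point is simply that a copy of your data should live on storage the cloud vendor hosting your account cannot touch.

```python
# Sketch: archive application data to a disk that is NOT at the cloud
# vendor. If an abuse algorithm kills the account, this copy survives.
# Paths are hypothetical; a real setup adds encryption and retention.
import datetime
import pathlib
import subprocess

SOURCE_DIR = pathlib.Path("/srv/app-data")         # data worth keeping
ARCHIVE_DIR = pathlib.Path("/mnt/offsite-backup")  # independent storage

def make_archive() -> pathlib.Path:
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = ARCHIVE_DIR / f"app-data-{stamp}.tar.gz"
    subprocess.run(
        ["tar", "-czf", str(archive), "-C", str(SOURCE_DIR.parent), SOURCE_DIR.name],
        check=True,
    )
    return archive

if __name__ == "__main__":
    print(f"Wrote {make_archive()}")
```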

Stephen E Arnold, June 7, 2019

Centralizing and Concentrating: Works Great Until It Does Not

April 1, 2019

Joke or no joke? Let’s assume the story is true.

US airlines are proving that centralizing and concentrating online services works great until the system fails. I read “Computer Outage Affecting Major US Airlines including Southwest, Delta and United Causes Hundreds of Flight Delays Nationwide.” (I first saw the news in a UK stream from the Daily Mail, a British newspaper.) As I write this at 9:10 am US Eastern (April 1, 2019), the story is now appearing in other feeds. The problem appears to involve software from a company called Aerodata. By 8:40 am US Eastern time, more than 700 flights had been affected.

What seems to be lousy systems administration, engineering, or business process has made April 1, 2019, a day of unpleasant anecdotes, not frothy jokes.

Aerodata’s Web site cheerfully reports my public IP address which, not surprisingly, is not my actual IP address. The Web site requires Flash, super insecure software in my opinion. I was not able to locate current news from the company. I noticed that VMware mentions that the company uses vSAN to power a modern software defined data center. You can read the marketing inspired explanation at this link, or you could at 9:17 am US Eastern on April 1, 2019.

According to a Chicago NBC outlet, all is well again. You can get this take at this link.

What happens if a cyber attack takes down a concentrated service?

Stephen E Arnold, April 1, 2019

Juicy Target: Big Cloudy Agglomerations of Virtual and Tangible Gizmos

March 9, 2019

Last week I had a call about the vulnerability of industrial facilities. The new approach is to push certain control, monitoring, and administrative systems to the cloud. The idea is that smart milling machines, welders, and similar expensive equipment can push their data to the “cloud.” The magic in the cloud then rolls up the data, giving the manufacturing outfit a big picture view of the individual machines in multiple locations. Need a human to make sure the industrial robots are working happily? Nope. Just look at a “dashboard.” If a deity were into running a chemical plant or making automobiles, the approach would seem like common sense.
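A minimal sketch of that push-to-the-cloud pattern, in Python. The endpoint URL and metric names are hypothetical; a real deployment would authenticate, batch, and retry.

```python
# Sketch: a machine tool posts readings to a cloud roll-up endpoint,
# where a vendor-side dashboard aggregates them across locations.
# The URL and metric names are made up for illustration.
import json
import time
import urllib.request

ENDPOINT = "https://telemetry.example-cloud.com/v1/machines/mill-03"  # hypothetical

def push_reading(reading: dict) -> None:
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        resp.read()  # the roll-up service acknowledges receipt

if __name__ == "__main__":
    while True:
        push_reading({
            "spindle_rpm": 11980,      # would come from the machine controller
            "coolant_temp_c": 41.2,
            "ts": time.time(),
        })
        time.sleep(5)                  # dashboard refresh cadence, arbitrary
```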

I read “Citrix Hacked and Didn’t Know Until FBI Alert.” The FBI is capable, but each week I receive email from companies which perform autonomous, proactive monitoring to identify, predict, and prevent breaches.

The write up points out:

The firm attributed the attack to an Iranian group called “IRIDIUM” and says it made off with “at least 6 terabytes of sensitive data stored in the Citrix enterprise network, including e-mail correspondence, files in network shares and other services used for project management and procurement.”

The article buries this statement deep in the report:

The breach disclosure comes just three days after Citrix updated its SD-WAN offering to help enterprises to administer user-centric policies and connect branch employees to applications in the cloud with greater security and reliability. The product is intended to simplify branch networking by converging WAN edge capabilities and defining security zones to apply different policies for different users.

What’s the implication?

Forget Go to My PC vulnerabilities. Old news. The bad actors may have the opportunity to derail certain industrial and manufacturing processes. What happens when a chemical plant gets the wrong instructions?

Remember the Port of Texas City mishap? A tragic failure. Accidental.

But Citrix style breaches combined with “we did not know” may presage intentional actions in the future.

Yep, cloudy with a chance of pain.

Stephen E Arnold, March 9, 2019

Fragmented Data: Still a Problem?

January 28, 2019

Digital transitions are a major shift for organizations. The shift includes new technology and better ways to serve clients, but it also includes massive amounts of data. All organizations with a successful digital implementation rely on data. Too much data, however, can hinder organizations’ performance. IT Pro Portal explains how something called mass data fragmentation became a major issue in the article, “What Is Mass Data Fragmentation, and Why Are IT Leaders So Worried About It?”

The biggest question is: what exactly is mass data fragmentation? I learned:

“We believe one of the major culprits is a phenomenon called mass data fragmentation. This is essentially just a technical way of saying, ’data that is siloed, scattered and copied all over the place’ leading to an incomplete view of the data and an inability to extract real value from it. Most of the data in question is what’s called secondary data: data sets used for backups, archives, object stores, file shares, test and development, and analytics. Secondary data makes up the vast majority of an organization’s data (approximately 80 per cent).”

The article compares secondary data to an iceberg: most of it is hidden beneath the surface. The poor visibility leads to compliance and vulnerability risks; in other words, security issues that put the entire organization at risk. Most organizations, however, view their secondary data as a storage bill, a compliance risk (at least that awareness is good), and a giant headache.

When organizations were surveyed about the amount of secondary data they hold, it turned out they had multiple copies of the same data spread over cloud and on premise locations. IT teams are expected to manage the secondary data across all the locations, but without the right tools and technology the task is unending, unmanageable, and the root of more problems.
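A minimal sketch, assuming only local file shares are in scope, of how those duplicate copies can be surfaced by content hash. The directory names are hypothetical, and a real fragmentation tool would also crawl cloud repositories:

```python
# Sketch: find the same data parked in several places by hashing file
# contents. Identical copies group under one digest, however scattered.
import hashlib
import pathlib
from collections import defaultdict

def sha256_of(path: pathlib.Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def find_duplicates(roots: list[str]) -> dict[str, list[pathlib.Path]]:
    groups: dict[str, list[pathlib.Path]] = defaultdict(list)
    for root in roots:
        for path in pathlib.Path(root).rglob("*"):
            if path.is_file():
                groups[sha256_of(path)].append(path)
    return {d: paths for d, paths in groups.items() if len(paths) > 1}

if __name__ == "__main__":
    # Hypothetical share mounts; point at your own storage.
    for digest, paths in find_duplicates(["/mnt/share1", "/mnt/share2"]).items():
        print(digest[:12], *paths, sep="\n  ")
```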

If organizations managed their mass data fragmentation efficiently, they would improve their bottom line, reduce costs, and reduce security risks. More access points to sensitive data, left unsecured, increase the risk of hacking and data theft.

Whitney Grace, January 28, 2019

Amazon Opens a New Front in the Cloud Wars

November 30, 2018

A Microsoft “expert” has explained why Azure, the Microsoft cloud service, failed Thanksgiving week. As with the explanation for the neutralizing of some customers’ Windows 10 machines, three problems arose. You can work through the explanation at this link, but you may, like me, remain skeptical about Microsoft’s ability to keep its cloud sunny. Key point: Microsoft apologizes for its mistakes. Yada yada yada.

At about the same time, Amazon announced that its cloud service uses its own custom designed Arm server processors. How will Microsoft compete with a service that is not without flaws but promises lower costs? The GeekWire write up states:

Vice president of infrastructure Peter DeSantis introduced the AWS Graviton Processor Monday night, adding a third chip option for cloud customers alongside instances that use processors from Intel and AMD. The company did not provide a lot of details about the processor itself, but DeSantis said that it was designed for scale-out workloads that benefit from a lot of servers chipping away at a problem.

From our vantage point in Harrod’s Creek, the Amazon approach seems useful for certain types of data mining and data analytics tasks. Could these be the type of tasks which are common when using systems like Palantir Gotham?
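For readers who want “scale-out” made concrete: it is work that splits into independent chunks which many inexpensive servers, or here simply many cores, can chip away at in parallel. A toy Python sketch with a hypothetical token-count task:

```python
# Toy scale-out workload: independent chunks, no coordination between
# workers, so adding more (cheap Arm) machines or cores speeds it up.
# The token-count task and sample data are hypothetical.
from multiprocessing import Pool

def count_tokens(chunk: str) -> int:
    return len(chunk.split())

if __name__ == "__main__":
    chunks = ["log line one", "another log line", "still more data"] * 10_000
    with Pool() as pool:                  # one worker per core
        total = sum(pool.map(count_tokens, chunks))
    print(f"{total} tokens counted in parallel")
```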

The key point, however, is “low cost.”

But the important strategic move is that Amazon is now in the chip business. What other hardware are the folks at the ecommerce site exploring? Amazon network hardware?

Microsoft makes fuzzy tablet-laptops, right?

Stephen E Arnold, November 30, 2018

Cloudtenna for Combined Cloud and Local Search

November 16, 2018

Here’s a claim we’ve heard before: ZDNet declares, “Find a File Anywhere: Cloudtenna Targets Local and Cloud File Search.” Writer Robin Harris begins by describing the problem this upgrade addresses: an increasing number of cloud storage locations, combined with on-premises servers, makes good search solutions even more challenging to build. Startup Cloudtenna is now expanding its cloud search engine, DirectSearch. Harris writes:

“The new product adds a machine learning platform that find files across disparate platforms, including Dropbox, Box, Microsoft OneDrive, Google Drive, Outlook, Gmail, Slack, Atlassian JIRA and Confluence, and local file servers. You can search on name, sender, date, file type, keyword, content, and other attributes regardless of where the file is located. That’s a lot, but it’s not the hard part. Nor is respecting file permissions, meaning that users can’t access files they aren’t supposed too. The hard part is doing this and delivering sub-second response times, even when thousands of users are searching across billions of files stored on dozens of repositories.”

Machine learning and a lightweight crawler (that collects metadata instead of files themselves) are strengths of the new platform. The company was understandably tight-lipped about the tech behind their cloudy search prowess, but they did release this tidbit:

“It uses real-time binding to build its file index and then performs consistency checks to capture deltas, such as a security change or a deleted file. File deduplication and ACL crunching reduces data required by the index, significantly reducing storage costs and requirements.”
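Cloudtenna released no code, so the following is only a guess at the shape of the pattern the quote describes: a metadata-only index, a delta pass that diffs successive crawls, and permission checks at query time. Every name here is hypothetical; this is not Cloudtenna’s actual code or API.

```python
# Hypothetical sketch of the quoted pattern: metadata-only entries,
# deltas captured by diffing crawls, and ACL filtering at query time.
from dataclasses import dataclass

@dataclass(frozen=True)
class FileMeta:
    path: str
    name: str
    mtime: float
    content_hash: str           # enables deduplication in the index
    allowed_users: frozenset    # "crunched" ACL: who may see this entry

def capture_deltas(old: dict[str, FileMeta], new: dict[str, FileMeta]):
    """Diff two crawls: adds, deletes, and changes such as an ACL edit."""
    added   = [m for p, m in new.items() if p not in old]
    deleted = [m for p, m in old.items() if p not in new]
    changed = [m for p, m in new.items() if p in old and old[p] != m]
    return added, deleted, changed

def search(index: dict[str, FileMeta], user: str, term: str) -> list[FileMeta]:
    """Return only entries the querying user is permitted to see."""
    term = term.lower()
    return [m for m in index.values()
            if term in m.name.lower() and user in m.allowed_users]
```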

A new OEM partner program helps users embed DirectSearch into existing platforms, and Cloudtenna offers a free, three-month account as a trial for potential users. Based in Sunnyvale, California, the company was founded in 2013.

Cynthia Murrell, November 16, 2018

Microsoft: Is the Master of Windows 10 Updates Really Beating Amazon in the Cloud?

November 7, 2018

How about that October 2018 Windows update? Does that give you confidence in Microsoft’s technical acumen? What? You are telling me that it is apples and oranges. Okay. Everyone is entitled to an opinion.

After reading a former Oracle executive’s analysis of Microsoft and Amazon cloud revenue, I suppose one could make that argument. I am not sure I buy the Forbes argument in “#1 Microsoft Beats Amazon In 12-Month Cloud Revenue, $26.7 Billion To $23.4 Billion; IBM Third.” The write up makes clear that the analyst was an award winning PR type at SAP and then a “communications officer” at Oracle before finding his true calling at Evans Strategic Communications LLC.

Is Microsoft #1?

From my point of view in lovely Harrod’s Creek, Kentucky, several items of information are omitted from the Forbes analysis; for example:

How does Microsoft calculate its cloud revenue? Does the number include enforced cloud services?

What part of Microsoft’s cloud revenue is generated by accounting methods such as reallocating revenue and thinking really hard about attributing certain revenue to the cloud line items?

Using these accounting methods, how has Microsoft’s cloud revenue tracked over the last 12 quarters?

Analyses require more than accepting the rolled up figure. But that’s the view in rural Kentucky; the rules may be different for PR experts in a real technology hotbed.

Now Amazon is no Mr. Clean when it comes to reporting its financial data. For years, AWS revenue was expressed as weird stuff like the number of operations a complex network of computers performs to complete work. Now Amazon generally reveals some numbers, and I assume these can be tweaked by folding some of the Amazon ecommerce magic into the cloud line.

The larger question for me is:

Why is a former Oracle guy writing a pro Microsoft and pro IBM story about the cloud race among three firms?

The write up included this bit of “let’s not talk about the October update” offered up by Microsoft’s big dog:

CEO Satya Nadella offered this perspective on the centerpiece of the Microsoft cloud: “Azure is the only hyperscale cloud that extends to the edge across identity, data, application platform and security and management. We introduced 100 new Azure capabilities this quarter alone, focused on both existing workloads like security and new workloads like IoT and Edge AI.”

Yep, I believe this. Every. Word.

Perhaps nailing down the inclusions in the gross cloud revenue numbers would be a useful first step? Would it be helpful to learn why an Oracle PR pro is dissing Amazon?

The capitalist tool’s presentation of this analysis might have caused Malcolm Forbes to crash his motorcycle on the way to brunch in Manhattan on Sunday morning.

Quite an “analysis.”

Stephen E Arnold, November 7, 2018

The Decentralized Web

August 16, 2018

The idea is a good one: a Web not delivered from a handful of centralized companies. On the other hand, decentralization has not achieved the success many have predicted.

We read “What Do You Believe Now That You Didn’t Five Years Ago.” We also noted “Tron to Become the Google for Blockchain Industry? Taking Slow Steps to Achieve Its Aim to ‘Decentralize the Web’”. Both of these articles are interesting.

The “What Do You Believe” discussion makes a good point:

Today, servers aren’t even cattle, servers are insects connected over fast networks. Centralization is not only possible now, it’s economical, it’s practical, it’s controllable, it’s governable, it’s economies of scalable, it’s reliable, it’s walled gardenable, it’s monetizable, it’s affordable, it’s performance tunable, it’s scalable, it’s cacheable, it’s securable, it’s defensible, it’s brandable, it’s ownable, it’s right to be forgetable, it’s fast releasable, it’s debuggable, it’s auditable, it’s copyright checkable, it’s GDPRable, it’s safe for China searchable, it’s machine learnable, it’s monitorable, it’s spam filterable, it’s value addable.

If true, decentralization is unlikely because of one major “able”: Economical.

The “Tron” article makes this point:

Tron Foundation aims to use BlockChain.Org to observe and keep track of all the information on social media, web, and other existing search engines. The information will be in all possible formats such as regular text, videos, pdf and other structured data.

Our question: Are these different visions or the same goal: A central point?

Stephen E Arnold, August 16, 2018

Google Contributes to the History of Kubernetes

August 15, 2018

It is time for a history lesson; the Google Cloud Platform Blog proffers, “From Google to the World: The Kubernetes Origin Story.” Anyone curious about the origins of the open source management system may want to check it out. The post begins with a description of the 2013 meeting at which the Kubernetes co-founders pitched their idea to executive Urs Holzle, a meeting which only happened because one of those founders (and the author of the post), Craig McLuckie, found himself on a shuttle with the company’s then-VP of Cloud, Eric Brewer. To conclude the post, McLuckie notes Kubernetes is now deployed in thousands of organizations and has benefited from some 237 person-years’ worth of coding put in by some 830 contributors. In between we find a little Star Trek-related trivia; McLuckie writes:

“In keeping with the Borg theme, we named it Project Seven of Nine. (Side note: in an homage to the original name, this is also why the Kubernetes logo has seven sides.) We wanted to build something that incorporated everything we had learned about container management at Google through the design and deployment of Borg and its successor, Omega — all combined with an elegant, simple and easy-to-use UI. In three months, we had a prototype that was ready to share.”

We also noted this statement:

“We always believed that open-sourcing Kubernetes was the right way to go, bringing many benefits to the project. For one, feedback loops were essentially instantaneous — if there was a problem or something didn’t work quite right, we knew about it immediately. But most importantly, we were able to work with lots of great engineers, many of whom really understood the needs of businesses who would benefit from deploying containers (have a look at the Kubernetes blog for perspectives from some of the early contributors).”

McLuckie includes links for potential users to explore the Kubernetes Engine and, perhaps, begin a two-month free trial. Finally, he suggests we navigate to his Kubernetes Origins podcast hosted by Software Engineering Daily for more information.

History is good.

Cynthia Murrell, August 15, 2018

Amazon Clarification on Network Switches

July 19, 2018

I read an exclusive on Marketwatch. (I did not know it was “real” journalism.) The story is “Exclusive: Amazon Denies It Will Challenge Cisco with Switch Sales.” The story’s main point struck me as:

Amazon.com Inc.’s top cloud-computing executive has officially denied that Amazon Web Services plans to start selling network switches to other businesses, after a report last week claiming that move was in the works damaged stocks of Cisco Systems Inc. and other major networking companies.

I think I understand.

Amazon may be building switches with Amazon Web Services and maybe its streaming data marketplace baked in. But these switches will not be sold to “other businesses.”

Such a switch would add some functionality to Amazon’s own infrastructure. I wonder if these switches, assuming they exist, would add some beef to Amazon’s government client activities. For example, some lawful intercept activities take place at network tiers where there are some quite versatile switches.

The write up adds:

Amazon would not comment on whether it is creating its own networking equipment, just that it did not plan to sell such equipment to other businesses.

If Amazon wins more US government cloud and AWS centric work, certification of these devices eliminates possible questions about backdoors or phone home functions in gear sourced from other companies.

To sum up, Amazon does not deny it is building switches (whatever that term includes).

Worth watching in the context of the ongoing dust up between Oracle’s data marketplace and Amazon’s designs on building a new source of revenue with its marketplace innovations.

Stephen E Arnold, July 19, 2018
