Quantum Baloney Spat: IBM Dismisses the GOOG over Supremacy
October 23, 2019
I am not holding my breath for quantum computers which do something semi-useful. Science club experiments are interesting but not something welcomed in Harrod’s Creek, Kentucky.
Not long ago a Googler announced that the GOOG was king and queen of the quantum hill. “IBM Upends Google’s Quantum Supremacy Claim” suggests that Google’s statement and subsequent removal of the document containing the claim was baloney. Hence, the quantum baloney spat.
The capitalist’s tool states:
Dario Gil, head of IBM quantum research, described the claim of quantum supremacy as indefensible and misleading. In a written statement, he said, “Quantum computers are not ‘supreme’ against classical computers because of a laboratory experiment designed to essentially implement one very specific quantum sampling procedure with no practical applications.”
Why believe IBM, the master of the Watson hot air balloon?
The answer:
Yesterday, IBM published a paper that backed up their claim. The paper points out that Google made an error in estimating that a classical computer would require 10,000 years to solve the problem.
There you go. Two self-published papers. Real news.
Forbes included a useful point:
According to IBM’s blog, “an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity.” The blog post went on to say that 2.5 days is a worst-case estimate. Additional research could reduce the time even further. Google’s 10,000-year estimate was overstated because of an erroneous assumption. They believed that RAM requirements for running a quantum simulation of the problem in a classical computer would be prohibitively high. For that reason, Google used the time to offset the lack of space, hence their estimate of 10,000 years.
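The RAM argument is easy to sanity check. Here is a back of the envelope sketch in Python (my arithmetic, not IBM's or Google's; it assumes a full state vector simulation of a 53 qubit circuit stored as 16 byte complex amplitudes):

```python
# Why a naive in-RAM simulation looked prohibitive: a full state vector
# for a 53 qubit circuit holds 2**53 complex amplitudes. At 16 bytes per
# complex128 amplitude, the vector dwarfs any single machine's RAM --
# which is why IBM's counter-argument spills the state to disk instead
# of trading space for time.
n_qubits = 53
bytes_per_amplitude = 16                    # complex128: two 8-byte floats
state_bytes = (2 ** n_qubits) * bytes_per_amplitude
print(f"{state_bytes / 10 ** 15:.0f} PB")   # 144 PB
```

Storage on that scale is awkward in RAM but plausible on a disk farm, which, as I understand it, is the crux of IBM's 2.5 day estimate.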
Cheese with that baloney?
Stephen E Arnold, October 23, 2019
IBM Says Hub-and-Spoke Model Will Make Watson a Winner. What about a Bottleneck?
October 4, 2019
Business Insider amuses me. It recycles IBM marketing material and slaps a paywall on collateral.
One possible example is the write up titled “The Head of IBM’s Watson Walks Us Through the Exact Model Tech Leaders Can Use to Build Excitement Around Any AI project.”
Note the word “exact.” Sounds like a winner. I like the “any AI project” phrase, but I would wager a copy of the IBM PC 704 RAID documentation that if the AI project relied on Amazon, Google, or Microsoft technology, IBM might want to rethink that “any AI project” assurance.
DarkCyber noted this statement which is allegedly spontaneous, unedited, and prose worthy of Cicero, a wordsmith alive when the Romans were using the hub-and-spoke system to organize the Empire as the Barbarians destroyed what Rome built:
One way to ensure projects advance is to appoint leaders within each respective business unit to help support the chief technology, data, or innovation officers, argues IBM’s Rob Thomas, a system he refers to as the “hub-and-spoke” model because the structure resembles one in which a central point is connected to several secondary points. “You need somebody that has a seat at the table at the top that’s saying it’s important to the company,” he told Business Insider. Organizations also “need somebody in those business units that owns this day-to-day, but is accountable back to the company strategy.”
Now the hub-and-spoke analogy is different from the distributed information and computing model. The reason is visible when it snows in Chicago. Flights are delayed because the hub doesn’t work. Contrast that with the architecture used by some of the Eastern European pirate streaming video sites.
A node dies and an origin server communicates with a control server to bring the node back up. What if an origin server is taken down? The smart software activates a hot spare origin server and the system comes back up. Magic? Nope, just side deals with some ISPs with interesting perceptions of right and wrong.
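The failover dance just described can be sketched in a few lines of Python (hypothetical names and logic, not any actual site's code):

```python
# A minimal sketch of the failover idea: a control server health-checks
# the active origin and promotes the next hot spare when it stops
# responding. All names here are invented for illustration.
class ControlServer:
    def __init__(self, origins):
        self.origins = list(origins)   # first entry active, the rest hot spares

    def health_check(self, is_up):
        # is_up: callable reporting whether a given origin responds
        while self.origins and not is_up(self.origins[0]):
            self.origins.pop(0)        # dead origin: promote the next spare
        return self.origins[0] if self.origins else None

ctl = ControlServer(["origin-a", "origin-spare"])
# origin-a goes dark; the control server activates the hot spare
print(ctl.health_check(lambda origin: origin == "origin-spare"))  # origin-spare
```

No single hub, so no O'Hare-in-a-snowstorm moment: the system degrades spare by spare instead of failing at one central point.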
What will save IBM? The “thousands of O’Hare flights are cancelled” approach or the distributed system which cyber criminals have embraced enthusiastically?
The fact is that the hub-and-spoke model is unlikely to breathe much life into IBM. The top down approach is conceptually useful because it explains some of the issues arising from Industrial Revolution management: Blue suit, red tie, white shirt, etc.
Not only is the IBM solution far from unusual, it is not special content. Want proof? Check out:
Microsoft’s 2009 encomium to SQL Server called “Using SQL Server to Build a Hub-and-Spoke Enterprise Data Warehouse Architecture.”
New? Yeah, well. Convinced? Nope. One could combine Microsoft AI with SQL Server in a corporation. Will IBM support that?
Let’s ask Watson.
Stephen E Arnold, October 4, 2019
The Register Rings the Bell on IBM
September 29, 2019
DarkCyber noted “Analyze This: IBM Punts Off Algorithm Risk Biz.” The main idea is that IBM is exiting the financial risk business. Smart finance and associated analytics is a hot business sector. Even Amazon, the online bookstore, has some capabilities in this area. IBM? Not so much.
We noted this statement:
IBM originally purchased the analytics products from Toronto-based Algorithmics in 2011 for $387m.
The article explains that IBM wants to focus.
One interesting point in the write up struck DarkCyber:
Kate Hanaghan, chief research officer at TechMarketView, said buying into new areas and selling off legacy ones are part of IBM’s turnaround plan. “The point is that IBM has to make some choices about where it should place its bets and sink its investment spend. Divestments are crucial and will without doubt continue – as will acquisitions.”
Sounds good, but this factoid explains the IBM problem:
IBM CEO, president and chairman Ginni Rometty took to the hot seat in 2011 when revenues came in at $106.9bn. At the end of 2018, revenues stood at $79.6bn.
There you go. Watson, what do you think?
Stephen E Arnold, September 29, 2019
IBM Cognos: A Mix of Marketing and Reality
July 15, 2019
Writing an online news story which takes the James Bond “shaken, not stirred” approach is difficult. A good example of partial success in blending marketing with reality is “IBM Battling to Change Perception of Cognos Analytics BI Platform.”
The source of the write up is an outfit called TechTarget, a publicly traded company. According to the firm’s Web site, the information services company:
is the online intersection of serious technology buyers, targeted technical content and technology providers worldwide. Our media, powered by TechTarget’s Activity Intelligence™ platform, redefines how technology buyers are viewed and engaged based on their active projects, specific technical priorities and business needs. With more than 100 technology specific websites, we provide technology marketers innovative media that delivers unmatched reach via custom advertising, branding and lead generation solutions all built on our extensive network of online and social media.
I noted the “custom advertising” phrase.
Now what’s Cognos?
Cognos dates from 1969, which makes the system 50 years old. In that span of time, analytics has emerged as the go-to technology for many firms. A working example is Google. Thus, the question arises:
With a head start, why hasn’t Cognos become the financial big bang that Google has?
Ah, apples and oranges? Maybe not. But 50 years is ample time to capture the rocket ship revenues possible from analytics!
Now the write up:
The article begins with a surprising admission. I expected a rah rah, sis boom bah approach to IBM Cognos. Instead I read:
While some see IBM’s BI suite as being too complicated and expensive for citizen data scientists, the company is adding updates to try and attract the modern user.
From my point of view, IBM’s software is too complicated. It is too expensive. IBM’s fixes are cosmetic, not structural. The people who are into Cognos are not modern.
Yep, that will boost sales.
In a half century, Cognos has been updated through 11 major versions. That’s roughly one every four and a half years, a bit below the constant stream of updates pumped into my system. In today’s world, I would characterize the approach as glacial.
IBM’s current pitch is that Cognos is just fine for medium sized businesses. That sounds good, particularly IBM’s statement:
“We released 11.0 in 2015 and spent a lot of time on the road at conferences banging the drum that this is not your grandfather’s Cognos, that this is the next iteration,” said Kevin McFaul, senior product manager of business analytics at IBM. “The capabilities are still there, but it was enhanced to target the new line of business users. Our competition went to market on ‘Cognos is too complex’ and we’ve done a lot of work to try and correct that perception.”
Okay, three years. Put in perspective, that is IBM time versus the time cycle at an outfit like DataWalk. IBM deals in clumps of years; DataWalk operates in “right now” time. (I don’t work for DataWalk, but the company’s fast cycle approach to analytics is more in touch with what DarkCyber believes is the future.)
Even Gartner, according to the write up, has pushed IBM Cognos down its wild and crazy subjective approach to identifying “with it” technology players. The write up quotes a Gartner wizard making a statement which will probably cause IBM to rethink its subscription to Gartner’s outstandingly subjective information services:
“IBM was a leader in traditional BI, but it took them a long time to respond to [changes in the market],” said Rita Sallam, a VP analyst at Gartner. “Cognos lost a lot of traction, but they’ve made promising investments in augmented intelligence, which we see as the next phase of BI.”
Yep, “was” and then “lost a lot of traction.” Then the magical “but.” Sure enough, brilliant Gartner wizard, sure enough.
So what’s new in the IBM “Pimp My Ride” approach to modernizing a 50 year old classic former leader?
It is still a truck, but a truck with Watson. Watson was supposed to be the billion dollar baby. Watson was supposed to help doctors, not create consternation. Watson was supposed to be smart out of the box, not require person years of smart humans crafting content to make Watson smart.
Watson is now in Cognos.
Complexity definitely is enhanced with more complexity. The idea is very good for consulting revenues IF someone can find a client to buy into the complexity squared approach to analytics and THEN pay money to get the system to perform like a customized mine truck. Slick, huh?
The write up concludes with this bit of sales genius:
“Their challenges don’t stem from the product,” Sallam said. “It’s their go-to-market strategy, how to sell beyond their installed base, how to attract new buyers. They’ve put in place a plan to do that, but we’ll see how well they execute on those plans.”
What? The product is 50 years old and stuck in the mud, despite the giant tires and chrome rims. The marketing strategy is crazy. No kidding. Just read this TechTarget write up. IBM cannot sell.
Pretty amazing.
Net net: How many ads will IBM buy in TechTarget products and services?
One can use IBM Cognos Analytics with Watson to predict the number. But why bother?
Stephen E Arnold, July 15, 2019
Supercomputer Time!
June 18, 2019
DarkCyber noted “Top 500: China Has 219 of the World’s Fastest Supercomputers.” The list contains one interesting factoid:
Lenovo was the top vendor both by the number of systems and combined petaflops. Its machines achieved upwards of 302 petaflops, to be exact, followed by IBM’s with 207 petaflops.
A petaflop is one quadrillion (10^15) floating point operations per second. A floating point operation is, according to the ever reliable Wikipedia:
arithmetic using formulaic representation of real numbers as an approximation so as to support a trade-off between range and precision. For this reason, floating-point computation is often found in systems which include very small and very large real numbers, which require fast processing times. A number is, in general, represented approximately to a fixed number of significant digits (the significand) and scaled using an exponent in some fixed base; the base for the scaling is normally two, ten, or sixteen.
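The Wikipedia definition in miniature, as a quick Python aside (nothing vendor specific, just the standard library):

```python
import math

# A float really is a significand scaled by an exponent in a fixed base
# (base 2 for IEEE 754 doubles, which is what Python floats are).
m, e = math.frexp(6.5)      # 6.5 == m * 2**e with 0.5 <= abs(m) < 1
print(m, e)                 # 0.8125 3

# And the "approximation" part of the definition: not every real number
# is exactly representable.
print(0.1 + 0.2 == 0.3)     # False; both sides round to nearest doubles
```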
Just for fun, I was thinking about defining each of the assorted words and phrases, but then this blog post would be very big endian.
The killer fact, however, is that China has 219 of these puppies. Here’s the passage which garnered a yellow circle from my marker:
China surpassed the U.S. by total number of ranked supercomputers for the first time in Top500 rankings two years ago, 202 to 143. That trend accelerated in the intervening year; according to the Top500 fall 2018 report, the number of ranked U.S. supercomputers fell to 108 as China’s total climbed to 229.
DarkCyber thinks this is an important message. In fact, it seems more significant than IBM’s announcement, reported in “We’ve Made World’s Most Powerful Commercial Supercomputer.” The implication is that the faster computers are not commercial, right? The enthusiastic recyclers of IBM’s marketing information reported as “real” news:
The IBM-built Pangea III supercomputer has come online for French energy giant Total, bringing 31.7 petaflops of processing power and 76 petabytes of storage capacity. It’s now the world’s most powerful supercomputer outside government-owned systems.
But there is a bit of intrigue surrounding this “commercial” angle. I have done a small amount of work in France, and I learned that the difference between a French company and the French government is often difficult for a person from another country to understand. It’s not the tax policies, not the regulatory net, and not the quasi government committees and advisory groups. Nope, it’s the reality that those who go to the top French universities generally keep in touch. The approach is similar to India’s graduates of elite secondary schools. Therefore, I am not sure I am confident about the “outside government owned systems” statement in the write up.
Skepticism aside, the supercomputer you can buy from IBM is going to be outgunned by 10 other systems.
The other interesting, allegedly true factoid in the write up is a nexus which includes the French, of course, IBM, and the ever lovable Google. IBM and Google are not graduates of GEM (Groupe des écoles des mines), but Google’s hook up with IBM warrants some consideration.
Stephen E Arnold, June 18, 2019
IBM Hyperledger: A Doubter
June 8, 2019
Though the IBM-backed open-source project Hyperledger has been prominent on the blockchain scene since 2016, The Next Web declares, “IBM’s Hyperledger Isn’t a Real Blockchain—Here’s Why.” Kadena president and article author Stuart Popejoy tells us:
“A blockchain is a decentralized and distributed database, an immutable ledger of events or transactions where truth is determined by a consensus mechanism — such as participants voting to agree on what gets written — so that no central authority arbitrates what is true. IBM’s definition of blockchain captures the distributed and immutable elements of blockchain but conveniently leaves out decentralized consensus — that’s because IBM Hyperledger Fabric doesn’t require a true consensus mechanism at all.”
We noted this statement as well:
“Instead, it suggests using an ‘ordering service’ called Kafka, but without enforced, democratized, cryptographically-secure voting between participants, you can’t really prove whether an agent tampers with the ledger. In effect, IBM’s ‘blockchain’ is nothing more than a glorified time-stamped list of entries. IBM’s architecture exposes numerous potential vulnerabilities that require a very small amount of malicious coordination. For instance, IBM introduces public-key cryptography ‘inside the network’ with validator signatures, which fundamentally invalidates the proven security model of Bitcoin and other real blockchains, where the network can never intermediate a user’s externally-provided public key signature.”
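Popejoy’s “glorified time-stamped list” is easy to picture. A hypothetical sketch (my code, not Hyperledger Fabric’s) shows the catch: a hash chain makes tampering detectable only to someone who independently holds an earlier copy; a lone operator can simply rebuild an internally consistent alternative history.

```python
import hashlib
import json
import time

# A hash-chained, time-stamped list: each entry commits to the hash of
# the previous one. Tamper-EVIDENT, but with no decentralized consensus
# nothing stops the operator of the "ordering service" from rewriting
# the chain end to end. Payload strings below are invented examples.
def append(chain, payload):
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"ts": time.time(), "payload": payload, "prev": prev}
    body = {k: entry[k] for k in ("ts", "payload", "prev")}
    entry["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(entry)
    return chain

ledger = []
append(ledger, "shipment received")
append(ledger, "shipment inspected")

# A single operator rebuilds the whole chain with altered history; the
# forged chain's internal hash links check out just as well.
forged = []
append(forged, "shipment received")
append(forged, "inspection waived")
```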
Then there are IBM’s approaches to architecture, security flaws, and smart contracts to consider, as well as misleading performance numbers. See the article for details on each of those criticisms. Popejoy concludes with the prediction that better blockchains are bound to be developed, alongside a more positive approach to technology in general, across society. Such optimism is refreshing. Watson, what do you think?
Cynthia Murrell, June 8, 2019
IBM Watson Studio: Watson, Will It Generate Big Revenue?
May 22, 2019
Despite its troubles, Watson lives on. ZDNet reports, “IBM Updates Watson Studio.” The AI-model-building platform relies on Watson’s famous machine learning technology and can deploy its models to either onsite data centers or in the cloud. Writer Stephanie Condon specifies:
“Watson Studio 2.0 includes a range of new features, starting with data preparation and exploration. For data exploration, IBM is adding 43 data connectors like Dropbox, Salesforce, Tableau and Looker. It’s also adding an Asset Browser experience to navigate through Schemas, Tables and Objects. For refining data, there are new tools for previewing and visualizing data. For running analytics where your data lives and to leverage existing compute, IBM has enhanced Watson Studio’s integrations with Hadoop Distributions (CDH and HDP). Watson Studio 2.0 also now includes built-in batch and evaluation job management for Python/R scripts, SPSS streams and Data Refinery Flows. There’s a new collaborative interface, similar to Slack, to the Jupyter Notebook integration. Version 2.0 also lets scientists import open source packages or libraries.”
We’re also told the release supports the major Git platforms: GitHub, GitHub Enterprise, Bitbucket, and Bitbucket Server. This release is in line with IBM’s goal, stated earlier this year, of making all of its Watson tech available to multiple cloud platforms. These applications, dev tools, and models now make their home under IBM Cloud Private for Data. Back to the question in the headline. The answer, stakeholders hope, is “Yes.” For those with less optimism, the answer may be, “Probably not.”
Cynthia Murrell, May 22, 2019
IBM: Watson, Are the Shrimp Ready to Eat?
May 8, 2019
I wish I could say I was making up the information in “IBM Is Putting Sustainable Shrimp on the Blockchain.” The idea is a mostly respectable one: Save the planet and help provide useful information about food. But shrimp on the barbeque? Yes, IBM.
The write up explains:
The Sustainable Shrimp Partnership (SSP) yesterday announced that it’s joined IBM’s Food Trust ecosystem, which claims to use blockchain to provide greater transparency over where your food comes from. The SSP says it will be using the IBM blockchain to track the journey of Ecuadorian farmed shrimp from birth to barbecue.
Are the data in the blockchain accurate? Not sure.
The French retail giant Carrefour has cheese on the blockchain. Are those data accurate? Mais oui. Carrefour is French.
Stephen E Arnold, May 8, 2019
Thomson Reuters: Whither Palantir Technologies
May 6, 2019
When I was working on a profile of Palantir Technologies for a client a couple of years ago, I came across a reference to Thomson Reuters’ use of Palantir Technologies’ smart system. News of the deal surfaced in a 2010 news release issued on Market Wired, but like many documents in the “new” approach to Web indexing, the content is a goner.
My memory isn’t what it used to be, but I recall that the application was called QA Studio. The idea obviously was to allow a person to ask a question using the “intuitive user interface” which the TR and Palantir team created to generate revenue magic. The goal was to swat the pesky FactSet and Bloomberg offerings as well as the legion of wanna-be analytics vendors chasing the Wall Street wizards.
Here’s a document from my files showing a bit of the PR lingo and the interface to the TR Palantir service:
I am not sure what happened to this product nor the relationship with the Palantir outfit.
I assume that TR wants more smart software, not just software which creates more work for the already overburdened MBAs planning the future of the economic world.
One of the DarkCyber researchers spotted this news release, which may suggest that TR is looking to the developer of OS/2 (once used by TR as I recall) for smart software: “IBM, Thomson Reuters Introduce Powerful New AI and Data Combination to Simplify How Financial Institutions Tackle Regulatory Compliance Challenges.”
The news release informed me that:
IBM and Thomson Reuters Regulatory Intelligence will now offer financial institutions access to a RegTech solution delivered from the IBM Cloud that features real-time financial services data from thousands of content sources. Backed by the power of AI and domain knowledge of Promontory Financial Group, the collaboration will enable risk and compliance professionals to keep pace with regulatory changes, manage risk and reduce the overall cost of compliance.
I learned:
Thomson Reuters and IBM have been collaborating on AI and data intelligence since 2015, bringing together expertise and technology to solve industry-specific problems in areas such as healthcare and data privacy. Today’s announcement represents another step forward in helping businesses combat their most pressing regulatory challenges.
The most interesting word in the news release is “holistic.” I haven’t encountered that since “synergy” became a thing. Here’s what the TR IBM news release offered:
Featuring an updated user experience to allow for increased engagement, IBM OpenPages with Watson 8.0 transforms the way risk and compliance professionals work. By providing a holistic view of risk and regulatory responsibilities, OpenPages helps compliance professionals actively participate in risk management as a part of their day-to-day activity. In addition to integrating Thomson Reuters Regulatory Intelligence, IBM OpenPages with Watson incorporates the expertise of Promontory Financial Group to help users of OpenPages create libraries of relevant regulatory requirements, map them to their internal framework and evaluate their impact to the business.
Yep, OpenPages. What is this? Well, it is Watson, but that doesn’t help me. Watson is more of a combo consulting-licensing thing. In this implementation, OpenPages reduces risk and makes “governance” better with AI and advanced analytics.
Analytics? That was the purpose of Palantir Technologies’ solution.
Let’s step back. What is the news release saying? These thoughts zoomed through my now confused brain:
- TR licensed Palantir’s system which delivers some of the most advanced analytics offered based on my understanding of the platform. Either TR can’t make Palantir do what TR wants to generate revenue or Palantir’s technology is falling below the TR standard for excellence.
- TR needs a partner which can generate commercial sales. IBM is supposed to be a sales powerhouse, but IBM’s financial performance has been dicey for years. Palantir, therefore, may be underperforming, and IBM’s approach is better. What?
- IBM’s Watson TR solution works better than IBM’s forays into medicine, enterprise search, cloud technology for certain government entities, and a handful of other market sectors. What?
To sum up, I am not sure which company is the winner in this TR IBM deal. One hypothesis is that both TR and IBM hope to pull a revenue bunny from the magic hat worn by ageing companies.
The unintentional cold shoulder to Palantir may not be a signal about that firm. But with IPO talk circulating in some circles, Palantir certainly wants outfits like TR to emit positive vibes.
Interesting stuff this analytics game. I suppose one must take a “holistic” view. Will there be “synergy” too?
Stephen E Arnold, May 6, 2019