Scale Fail: Define Scale for Tech Giants, Not Residents of Never Never Land

December 29, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I read “Scale Is a Trap.” The essay presents an interesting point of view: scale as seen by a resident of Never Never Land. The write up states:

But I’m pretty convinced the reason these sites [Vice, Buzzfeed, and other media outfits] have struggled to meet the moment is because the model under which they were built — eyeballs at all cost, built for social media and Google search results — is no longer functional. We can blame a lot of things for this, such as brand safety and having to work through perhaps the most aggressive commercial gatekeepers that the world has ever seen. But I think the truth is, after seeing how well it worked for the tech industry, we made a bet on scale — and then watched that bet fail over and over again.

The problem is that the focus is on media companies designed to surf on free megaphones like Twitter and on the money from Google’s pre-threat ad programs.

However, knowledge is tough to scale. The firms which can convert knowledge into what William James called “cash value” charge for professional services. Some content is free, like wild and crazy white papers. But the “good stuff” is for paying clients.

Finding enough subscribers who will pay the necessary money to read articles is a difficult business to scale. I find it interesting that Substack is accepting some content sure to attract some interesting readers. How much will these folks pay? Maybe a lot?

But scale in information is not something many clever writers or traditional publishers and authors can achieve. What happens when a person writes a best seller? The publisher demands more books, and the result? Subsequent books which are not what the original was.

Whom does scale serve? Scale delivers power and payoff to the organizations which can develop products and services that sell to a large number of people who want a deal. Scale at a blue chip consulting firm means selling to the biggest firms and the organizations with the deepest pockets.

But the scale of a McKinsey-type firm is different from the scale at an outfit like Microsoft or Google.

What is the definition of scale for a big outfit? The way I would explain what the technology firms mean when scale is kicked around at an artificial intelligence conference is “big money, big infrastructure, big services, and big brains.” By definition, individuals and smaller firms cannot deliver.

Thus, the notion of appropriate scale means what the cited essay calls a “niche.” The problems and challenges include:

  • Getting the cash to find, cultivate, and grow people who will pay enough to keep the knowledge enterprise afloat
  • Finding other people to create the knowledge value
  • Protecting the idea space from carpetbaggers
  • Remaining relevant because knowledge has a shelf life, and it takes time to grow knowledge or acquire new knowledge.

To sum up, the essay is more about how journalists are going to have to adapt to a changing world. The problem is that scale is a characteristic of old school publishing outfits, which have proved ill-suited to the stress of rapid change.

Writers are not blue chip consultants. Many just think they are.

Stephen E Arnold, December 29, 2023

AI Silly Putty: Squishes Easily, Impossible to Remove from Hair

December 29, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I like happy information. I navigated to “Meta’s Chief AI Scientist Says Terrorists and Rogue States Aren’t Going to Take Over the World with Open Source AI.” Happy information. Terrorists and the Axis of Evil outfits are just going to chug along. Open source AI is not going to give these folks a super weapon. I learned from the write up that the trustworthy outfit Zuckbook has a Big Wizard in artificial intelligence. That individual provided some cheerful words of wisdom for me. Here’s an example:

It won’t be easy for terrorists to take over the world with open-source AI.

Obviously there’s a caveat:

they’d need a lot of money and resources just to pull it off.

That’s my happy thought for the day.

“Wow, getting this free silly putty out of your hair is tough,” says the scout mistress. The little scout asks, “Is this similar to coping with open source artificial intelligence software?” Thanks, MSFT Copilot. After a number of weird results, you spit out one that is good enough.

Then I read “China’s Main Intel Agency Has Reportedly Developed An AI System To Track US Spies.” Oh, oh. Unhappy AI information. China, I assume, has the open source AI software. It probably has in its 1.4 billion population a handful of AI wizards comparable to the Zuckbook’s line up. Plus, despite economic headwinds, China has money.

The write up reports:

The CIA and China’s Ministry of State Security (MSS) are toe to toe in a tense battle to beat one another’s intelligence capabilities that are increasingly dependent on advanced technology…, the NYT reported, citing U.S. officials and a person with knowledge of a transaction with contracting firms that apparently helped build the AI system. But, the MSS has an edge with an AI-based system that can create files near-instantaneously on targets around the world complete with behavior analyses and detailed information allowing Beijing to identify connections and vulnerabilities of potential targets, internal meeting notes among MSS officials showed.

Not so happy.

Several observations:

  1. The smart software is a cat out of the bag
  2. There are intelligent people who are not pals of the US who can and will use available tools to create issues for a perceived adversary
  3. The AI technology is like silly putty: Easy to get, free or cheap, and tough to get out of someone’s hair.

What’s the deal with silly putty? Cheap, easy, and tough to remove from hair, carpet, and seat upholstery. Just like open source AI software in the hands of possibly questionable actors. How are those government guidelines working?

Stephen E Arnold, December 29, 2023

The American Way: Loose the Legal Eagles! AI, Gray Lady, AI.

December 29, 2023

This essay is the work of a dumb dinobaby. No smart software required.

With the demands of the holidays, I have been remiss in commenting upon the festering legal sores plaguing the “real” news outfits. Advertising is tough to sell. Readers want some stories, not every story. Subscribers churn. The dead tree version of “real” news turns yellow in the windows of the shrinking number of bodegas, delis, and coffee shops interested in losing floor space to “real” news displays.

A youthful senior manager enters Dante’s fifth circle of Hades, the Flaming Legal Eagles Nest. Beelzebub wishes the “real” news professional good luck. Thanks, MSFT Copilot, I encountered no warnings when I used the word “Dante.” Good enough.

Google may be coming out of the dog training school with some slightly improved behavior. The leash does not connect to a shock collar, but maybe the courts will curtail some of the firm’s more interesting behaviors. The Zuckbook and X.com are news shy. But the smart software outfits are ripping the heart out of “real” news. That hurts, and someone is going to pay.

Enter the legal eagles. The target is AI or smart software companies. The legal eagles say, “AI, gray lady, AI.”

How do I know? Navigate to “New York Times Sues OpenAI, Microsoft over Millions of Articles Used to Train ChatGPT.” The write up reports:

The New York Times has sued Microsoft and OpenAI, claiming the duo infringed the newspaper’s copyright by using its articles without permission to build ChatGPT and similar models. It is the first major American media outfit to drag the tech pair to court over the use of stories in training data.

The article points out:

However, to drive traffic to its site, the NYT also permits search engines to access and index its content. "Inherent in this value exchange is the idea that the search engines will direct users to The Times’s own websites and mobile applications, rather than exploit The Times’s content to keep users within their own search ecosystem." The Times added it has never permitted anyone – including Microsoft and OpenAI – to use its content for generative AI purposes. And therein lies the rub. According to the paper, it contacted Microsoft and OpenAI in April 2023 to deal with the issue amicably. It stated bluntly: "These efforts have not produced a resolution."

I think this means that the NYT used online search services to generate visibility, access, and revenue. However, it did not expect, understand, or consider that when a system indexes content, that content is used for other search services. Am I right? A doorway works two ways. The NYT wants it to work one way only. I may be off base, but the NYT is aggrieved because it did not understand the direction of AI research which has been chugging along for 50 years.
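
A doorway can even be inspected programmatically. Below is a minimal sketch (my illustration, not the NYT’s or OpenAI’s actual tooling; the URLs are merely examples, and the output depends on the live robots.txt) of how a polite crawler checks a publisher’s robots.txt per user agent, which is the mechanism a site uses to admit search crawlers while barring AI-training crawlers.

```python
# A minimal sketch: a well-behaved crawler asks robots.txt for permission
# before fetching. Results depend on the publisher's live robots.txt.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://www.nytimes.com/robots.txt")
rp.read()  # fetch and parse the publisher's crawler rules

for agent in ("Googlebot", "GPTBot"):
    allowed = rp.can_fetch(agent, "https://www.nytimes.com/section/technology")
    print(f"{agent}: {'may crawl' if allowed else 'blocked'}")
```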

What do smart systems require? Information. Where do companies get content? From online sources accessible via a crawler. How long has this practice been chugging along? The early 1990s, even earlier if one considers text and command line only systems. Plus the NYT tried its own online service and failed. Then it hooked up with LexisNexis, only to pull out of the deal because the “real” news was worth more than LexisNexis would pay. Then the NYT spun up its own indexing service. Next the NYT dabbled in another online service. Plus the outfit acquired About.com. (Where did those writers get that content? I know the answer, but does the Gray Lady remember?)

Now, with the success of another generation of software which the Gray Lady overlooked, did not understand, or blew off because it was dealing with high school management methods in its newsroom, the Gray Lady has let loose the legal eagles.

What do I make of the NYT and online? Here are the conclusions I reached working on the Business Dateline database and then as an advisor to one of the NYT’s efforts to distribute the “real” news to hotels and steam ships via facsimile:

  1. Newspapers are not very good at software. Hey, those Linotype machines were killers, but the XyWrite software and subsequent online efforts have demonstrated remarkable ways to spend money and progress slowly.
  2. The smart software crowd is not in touch with the thought processes of those in senior management positions in publishing. When the groups try to find common ground, arguments over who pays for lunch are more common than a deal.
  3. Legal disputes are expensive. Many of those engaged reach some type of deal before letting a judge or a jury decide which side is the winner. Perhaps the NYT is confident that a jury of its peers will find the evil AI outfits guilty of a range of heinous crimes. But maybe not? Is the NYT a risk taker? Who knows. But the NYT will pay some hefty legal bills as it rushes to do battle.

Net net: I find the NYT’s efforts following a basic game plan. Ask for money. Learn that the money offered is less than the value the NYT slaps on its “real” news. The smart software outfit does what it has been doing. The NYT takes legal action. The lawyers engage. As the fees stack up, the idea that a deal is needed makes sense.

The NYT will do a deal, declare victory, and go back to creating “real” news. Sigh. Why? Microsoft has more money and can tie up the matter in court until Hell freezes over in my opinion. If the Gray Lady prevails, chalk up a win. But the losers can just up their cash offer, and the Gray Lady will smile a happy smile.

Stephen E Arnold, December 29, 2023

AI Risk: Are We Watching Where We Are Going?

December 27, 2023

This essay is the work of a dumb dinobaby. No smart software required.

To brighten your New Year, navigate to “Why We Need to Fear the Risk of AI Model Collapse.” I love those words: Fear, risk, and collapse. I noted this passage in the write up:

When an AI lives off a diet of AI-flavored content, the quality and diversity is likely to decrease over time.

I think the idea of marrying one’s first cousin or training an AI model on AI-generated content is a bad idea. I don’t really know, but I find the idea interesting. The write up continues:

Is this model at risk of encountering a problem? Looks like it to me. Thanks, MSFT Copilot. Good enough. Falling off the I beam was a non-starter, so we have a more tame cartoon.

Model collapse happens when generative AI becomes unstable, wholly unreliable or simply ceases to function. This occurs when generative models are trained on AI-generated content – or “synthetic data” – instead of human-generated data. As time goes on, “models begin to lose information about the less common but still important aspects of the data, producing less diverse outputs.”
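
A toy demonstration of that dynamic (my sketch, using a deliberately simple one-parameter “model,” not anything from the cited essay): refit a Gaussian on its own samples, generation after generation, and the spread of the outputs tends to shrink.

```python
# A minimal sketch: each generation trains only on the previous
# generation's synthetic output. The fitted std is biased low, so the
# "less common but still important" tails thin out over time.
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(loc=0.0, scale=1.0, size=100)  # original "human" data

mu, sigma = data.mean(), data.std()
for generation in range(1, 101):
    synthetic = rng.normal(mu, sigma, size=100)    # model output only
    mu, sigma = synthetic.mean(), synthetic.std()  # refit on synthetic data
    if generation % 20 == 0:
        print(f"generation {generation:3d}: sigma = {sigma:.3f}")
# sigma drifts downward on average: a cartoon version of model collapse
```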

I think the quoted passage echoes some of my team’s thoughts about the SAIL Snorkel method. Googzilla needs a snorkel when it does data dives in some situations. The company often deletes data until a legal proceeding reveals what’s under the company’s expensive, smooth, sleek, true blue, gold trimmed kimonos.

The write up continues:

There have already been discussions and research on perceived problems with ChatGPT, particularly how its ability to write code may be getting worse rather than better. This could be down to the fact that the AI is trained on data from sources such as Stack Overflow, and users have been contributing to the programming forum using answers sourced in ChatGPT. Stack Overflow has now banned using generative AIs in questions and answers on its site.

The essay explains a couple of ways to remediate the problem. (I like fairy tales.) The first is to use data that comes from “reliable sources.” What’s the definition of reliable? Yeah, problem. Second, the smart software companies have to reveal what data were used to train a model. Yeah, techno feudalists totally embrace transparency. And, third, “ablate” or “remove” “particular data” from a model. Yeah, who defines “bad” or “particular” data? How about the techno feudalists, their contractors, or their former employees?
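
The third remedy at least lends itself to a sketch. Assuming, generously, that every training document arrives with a provenance tag (hypothetical data, my illustration), ablation is a one-line filter; the hard part is exactly the question above, who assigns the tags.

```python
# A minimal sketch: "ablating" synthetic content is trivial *if* provenance
# tags exist. Deciding who assigns them is the unsolved part.
corpus = [
    {"text": "hand-written field report", "source": "human"},
    {"text": "chatbot-generated summary", "source": "synthetic"},
    {"text": "scraped forum post", "source": "unknown"},
]

training_set = [doc for doc in corpus if doc["source"] == "human"]
print(f"kept {len(training_set)} of {len(corpus)} documents")
# note the "unknown" bucket: that is where the definitional fight lives
```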

For now, let’s just use our mobile phone to access MSFT Copilot and fix our attention on the screen. What’s to worry about? The person in the cartoon put the humanoid form in the apparently risky and possibly dumb position. What could go wrong?

Stephen E Arnold, December 27, 2023

AI and the Obvious: Hire Us and Pay Us to Tell You Not to Worry

December 26, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I read “Accenture Chief Says Most Companies Not Ready for AI Rollout.” The paywalled write up is an opinion from one of Captain Obvious’ closest advisors. The CEO of Accenture (a general purpose business expertise outfit) reveals some gems about artificial intelligence. Here are three which caught my attention.

#1 — “Sweet said executives were being “prudent” in rolling out the technology, amid concerns over how to protect proprietary information and customer data and questions about the accuracy of outputs from generative AI models.”

The secret to AI consulting success: Cost, fear of failure, and uncertainty or CFU. Thanks, MSFT Copilot. Good enough.

Arnold comment: Yes, caution is good because selling caution consulting generates juicy revenues. Implementing something that crashes and burns is a generally bad idea.

#2 — “Sweet said this corporate prudence should assuage fears that the development of AI is running ahead of human abilities to control it…”

Arnold comment: The threat, in my opinion, comes from a handful of large technology outfits and from the legions of smaller firms working overtime to apply AI to anything that strikes the fancy of the entrepreneurs. These outfits think about sizzle first, consequences maybe later. Much later.

#3 — “There are no clients saying to me that they want to spend less on tech,” she said. “Most CEOs today would spend more if they could. The macro is a serious challenge. There are not a lot of green shoots around the world. CEOs are not saying 2024 is going to look great. And so that’s going to continue to be a drag on the pace of spending.”

Arnold comment: Great opportunity to sell studies, advice, and recommendations when customers are “not saying 2024 is going to look great.” Hey, what’s “not going to look great” mean?

The obvious is — obvious.

Stephen E Arnold, December 26, 2023

AI Is Here to Help Blue Chip Consulting Firms: Consultants, Tighten Your Seat Belts

December 26, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I read “Deloitte Is Looking at AI to Help Avoid Mass Layoffs in Future.” The write up explains that blue chip consulting firms (“the giants of the consulting world”) have been allowing many Type A’s to find their future elsewhere. (That’s consulting speak for “You are surplus,” “You are not suited for another team,” or “Hasta la vista.”) The message Deloitte is sending strikes me as, “We are leaders in using AI to improve the efficiency of our business. You (potential customers) can hire us to implement AI strategies and tactics to deliver the same turbo boost to your firm.” Deloitte is not the only “giant” moving to use AI to improve “efficiency.” The big folks and the mid-tier players are too. But let’s look at the Deloitte premise in what I see as a PR piece.

Hey, MSFT Copilot. Good enough. Your colleagues do have experience with blue-chip consulting firms which obviously assisted you.

The news story explains that Deloitte wants to use AI to help figure out who can be billed at startling hourly fees for people whose pegs don’t fit into the available round holes. But the real point of the story is that the “giants” are looking at smart software to boost productivity and margins. How? My answer is that management consulting firms are “experts” in management. Therefore, if smart software can make management better, faster, and cheaper, the “giants” have to use best practices.

And what’s a best practice in the context of the “giants” and the “avoid mass layoffs” angle? My answer is, “Money.”

The big dollar items for the “giants” are people and their associated costs, travel, and administrative tasks. Smart software can replace some people. That’s a no brainer. Dump some of the Type A’s who don’t sell big dollar work, winnow those who are not wedded to the “giant” firm, and move the administrivia to orchestrated processes with smart software watching and deciding 24×7.

Imagine the “giants” repackaging these “learnings” and then selling the how-to information and payoff data to less informed outfits. Once that is firmly in mind, the money for the senior partners who are not on the “hasta la vista” list goes up. The “giants” are not altruistic. The firms are built from the ground up to generate cash, leverage connections, and provide services to CEOs with imposter syndrome and other issues.

My reaction to the story is:

  1. Yep, marketing. Some will do the Harvard Business Review journey; others will pump out white papers; many will give talks to “preferred” contacts; and others will just imitate what’s working for the “giants”
  2. Deloitte is redefining what expertise it will require to get hired by a “giant” like the accounting/consulting outfit
  3. The senior partners involved in this push are planning what to do with their bonuses.

Are the other “giants” on the same path? Yep. Imagine. Smart software enabled “giants” making decisions for the organizations able to pay for advice, insight, and the warm embrace of AI-enabled humanoids. What’s the probability of success? Close enough for horseshoes, and even bigger money for some blue chip professionals. Did Deloitte over hire during the pandemic?

Of course not. The tactic was part of the firm’s plan to put AI to a real world test. Sounds good. I cannot wait until the case studies become available.

Stephen E Arnold, December 26, 2023

An Important, Easily Pooh-Poohed Insight

December 24, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Dinobaby here. I am on the regular highway, not the information highway. Nevertheless I want to highlight what I call an “easily pooh-poohed” factoid. The source of the item this morning is an interview titled “Google Cloud Exec: Enterprise AI Is Game-Changing, But Companies Need to Prepare Their Data.”

I am going to skip the PR baloney, the truisms about Google fumbling the AI ball, and the rah rah about AI changing everything. Let me go straight to the factoid which snagged my attention:

… at the other side of these projects, what we’re seeing is that organizations did not have their data house in order. For one, they had not appropriately connected all the disparate data sources that make up the most effective outputs in a model. Two, so many organizations had not cleansed their data, making certain that their data is as appropriate and high value as possible. And so we’ve heard this forever — garbage in, garbage out. You can have this great AI project that has all the tenets of success and everybody’s really excited. Then, it turns out that the data pipeline isn’t great and that the data isn’t streamlined — all of a sudden your predictions are not as accurate as they could or should have been.

Why are these points about data significant?

First, investors, senior executives, developers, and the person standing on line with you at Starbucks dismiss data normalization as a solved problem. Sorry, getting the data boat to float is a work in progress. Few want to come to grips with the issue.

Second, fixing up data is expensive. Did you ever wonder why the Stanford president made up data, forcing his resignation? The answer is that the “cost of fixing up data is too high.” If the president of Stanford can’t do it, is the run-of-the-mill fast talking AI guru different? Answer: Nope.

Third, knowledge of exception folders and non-conforming data is confined to a small number of people. Most will explain what is needed to make a content intake system work. However, many give up because the cloud of unknowing is unlikely to disperse.
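
For the curious, here is a minimal sketch (my illustration; the file names are hypothetical, and real intake systems are far messier) of what an exception-folder pass looks like before any AI training run.

```python
# A minimal sketch: quarantine non-conforming rows in an exception folder
# instead of silently dropping them -- the unglamorous "data house" work.
from pathlib import Path

import pandas as pd

df = pd.read_csv("customer_records.csv")  # hypothetical source file

bad_mask = df.isna().any(axis=1) | df.duplicated()
Path("exceptions").mkdir(exist_ok=True)
df[bad_mask].to_csv("exceptions/needs_review.csv", index=False)

clean = df[~bad_mask]
print(f"kept {len(clean)} of {len(df)} rows; "
      f"{int(bad_mask.sum())} routed to the exception folder")
```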

The bottom line is that many data sets are not what senior executives, marketers, or those who use the data believe they are. The Google comment — despite Google’s sketchy track record in plain honest talk — is mostly correct.

So what?

  1. Outputs are often less useful than many anticipated. But if the user is uninformed or the downstream system uses whatever is pushed to it, no big deal.
  2. The thresholds and tweaks needed to make something semi useful are not shared, discussed, or explained. Keep the mushrooms in the dark and feed them manure. What do you get? Mushrooms.
  3. The graphic outputs are eye candy and distracting. Look here, not over there. Sizzle sells and selling is important.

Net net: Data are a problem. Data have been a problem due to time and cost issues. Data will remain a problem because one can sidestep a problem few recognize, and those who do recognize the pit find a short cut. What’s this mean for AI? Those smart systems will be super. What’s in your AI stocking this year?

Stephen E Arnold, December 24, 2023

Bugged? Hey, No One Can Get Our Data

December 22, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I read “The Obscure Google Deal That Defines America’s Broken Privacy Protections.” In the cartoon below, two young people are confident that their lunch will be undisturbed. No “bugs” will chow down on their hummus, sprout sandwiches, or their information. What happens, however, is that the young picnic fans cannot perceive what is out of sight. Are these “bugs” listening? Yep. They are. 24×7.

What the young fail to perceive is that “bugs” are everywhere. These digital creatures are listening, watching, harvesting, and consuming every scrap of information. The image of the picnic evokes an experience unfolding in real time. Thanks, MSFT Copilot. My notion of “bugs” is obviously different from yours. Good enough and I am tired of finding words you can convert to useful images.

The essay explains:

While Meta, Google, and a handful of other companies subject to consent decrees are bound by at least some rules, the majority of tech companies remain unfettered by any substantial federal rules to protect the data of all their users, including some serving more than a billion people globally, such as TikTok and Apple.

The situation is simple: Major centers of techno gravity remain unregulated. Law makers, regulators, and “users” either did not understand or just believed what lobbyists told them. The senior executives of certain big firms smiled, said “Senator, thank you for that question,” and continued to build out their “bug” network. Do governments want to lose their pride of place with these firms? Nope. Why? Just reference bad actors who commit heinous acts and invoke “protect our children.” When these refrains from the techno feudal playbook sound, calls to take meaningful action become little more than a faint background hum.

But the article continues:

…there is diminishing transparency about how Google’s consent decree operates.

I think I understand. Google-type companies pretend to protect “privacy.” Who really knows? Just ask a Google professional. The answer in my experience is, “Hey, dude, I have zero idea.”

How does Wired, the voice of the techno age, conclude its write up? Here you go:

The FTC agrees that a federal privacy law is long overdue, even as it tries to make consent decrees more powerful. Samuel Levine, director of the FTC’s Bureau of Consumer Protection, says that successive privacy settlements over the years have become more limiting and more specific to account for the growing, near-constant surveillance of Americans by the technology around them. And the FTC is making every effort to enforce the settlements to the letter…

I love the “every effort.” The reality is that the handling of online data collection presages the trajectory for smart software. We live with bugs. Now those bugs can “think”, adapt, and guide. And what’s the direction in which we are now being herded? Grim, isn’t it?

Stephen E Arnold, December 23, 2023

A High Profile Religious Leader: AI? Yeah, Well, Maybe Not So Fast, Folks

December 22, 2023

This essay is the work of a dumb dinobaby. No smart software required.

The trusted news outfit Thomson Reuters put out a story about the thoughts of the Pope, the leader of millions of Catholics. Presumably many of these people use ChatGPT-type systems to create content. (I wonder if Leonardo would have used an OpenAI system to crank out some art work. He was an innovator. My hunch is that he would have given MidJourney-type smart software a whirl.)

A group of religious individuals thinking about artificial intelligence. Thanks, MidJourney, a good enough engraving.

“Pope Francis Calls for Binding Global Treaty to Regulate AI” reports that Pope Francis wants someone to create a legally binding international treaty. The idea is that AI numerical recipes would be prevented from replacing humans and their good old human values. AI would output answers, and humans would use those answers to find pizza joints, develop smart weapons, and eliminate carbon by eliminating carbon generating entities (maybe humans?).

The trusted news outfit’s report included this quote from the Pope:

I urge the global community of nations to work together in order to adopt a binding international treaty that regulates the development and use of artificial intelligence in its many forms…

The Pope mentioned a need to avoid a technological dictatorship. He added:

Research on emerging technologies in the area of so-called Lethal Autonomous Weapon Systems, including the weaponization of artificial intelligence, is a cause for grave ethical concern. Autonomous weapon systems can never be morally responsible subjects…

Several observations are warranted:

  1. Is this a UN job, or is some other entity responsible for obtaining consensus and effective enforcement?
  2. Who develops the criteria for “good” AI, “neutral” AI, and “bad” AI?
  3. What are the penalties for implementing “bad” AI?

For me the Pope’s statement is important. It may be difficult to implement without a global dictatorship or a sudden change in how informed people debate and respond to difficult issues. From my point of view, the Pope should worry. When I look at the images of the Four Horsemen of the Apocalypse, the riders remind me of four high profile leaders in AI. That’s my imagination reading into the depictions of conquest, war, famine, and death.

Stephen E Arnold, December 22, 2023

Palantir to Solve Banking IT Problems: Worth Monitoring

December 21, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Palantir Technologies recast itself as an artificial intelligence company. The firm persevered in England and positioned itself as the one best choice to wrestle the UK National Health Service’s IT systems into submission. Now, the company founded 20 years ago is going to demonstrate its business chops in a financial institution.

A young IT wizard explains to a group of senior executives, “Our team can deal with mainframe software and migrate the operations of this organization to a modern, scalable, more economical, and easier-to-use system. I am wearing a special seeing stone, so trust me.” Thanks, MSFT Copilot. It took five tries to get a good enough cartoon.

Before referencing the big, new job Palantir has “won,” I want to mention an interesting 2016 write up called “Interviewing My Mother, a Mainframe COBOL Programmer” by Tom Jordan. I want to point out that I am not suggesting that financial institutions have not solved their IT problems. I simply don’t know. But from my poking around the Charlie Javice matter, my hunch is that bank IT systems have not changed significantly in the last seven years. Had the JPMC infrastructure been humming along with real-time data checks and smart software to determine if data were spoofed, those $175 million would not have flown the upscale coop at JP Morgan Chase. For some Charlie Javice detail, navigate to this CNBC news item.

Here are several points about financial institutions’ IT infrastructure from the 2016 mom chat:

  1. Many banks rely on COBOL programs
  2. Those who wrote the COBOL programs may be deceased or retired
  3. Newbies may not know how undocumented legacy COBOL programs interact with other undocumented programs
  4. COBOL is not the go-to language for most programmers
  5. The databases for some financial institutions are not widely understood; for example, DL/1 / IMS, so some programmers have to learn something new about something old
  6. Moving data around can be tricky, and the documentation about what an upstream system does and how it interacts with a downstream system may be fuzzy or unknown (see the sketch after this list).
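
As a concrete taste of point six, here is a minimal sketch of re-reading a COBOL fixed-width record in Python. The record layout is entirely hypothetical; real copybooks are longer and meaner.

```python
# A minimal sketch, hypothetical layout: PIC X(10) account id,
# PIC X(20) name, PIC 9(9)V99 balance. No delimiters, and the decimal
# point in the balance is implied, not stored -- a classic migration trap.
import struct

RECORD = struct.Struct("10s20s11s")  # 41 bytes per record

def parse_record(raw: bytes) -> dict:
    acct, name, bal = RECORD.unpack(raw[:RECORD.size])
    return {
        "account": acct.decode("ascii").strip(),
        "name": name.decode("ascii").strip(),
        "balance": int(bal) / 100,  # apply the implied V99 decimal
    }

print(parse_record(b"0012345678JANE Q PUBLIC       00000123456"))
# {'account': '0012345678', 'name': 'JANE Q PUBLIC', 'balance': 1234.56}
```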

Anyone who has experience fiddling with legacy financial systems knows that changes require an abundance of caution. An error can wreak financial havoc. For more “color” about legacy systems used in banks, consult Mr. Jordan’s mom interview.

I thought about Mr. Jordan’s essay when I read “Palantir and UniCredit Renew Digital Transformation Partnership.” Palantir has been transforming UniCredit for five years, but obviously more work is needed. From my point of view, Palantir is a consulting company which does integration. Thus, the speed of the transformation is important. Time is money. The write up states:

The partnership will see UniCredit deploy the Palantir Foundry operating system to accelerate the bank’s digital transformation and help increase revenue and mitigate risks.

I like the idea of a financial services institution increasing its revenue and reducing its risk.

The report about the “partnership” adds:

Palantir and UniCredit first partnered in 2018 as the bank sought technology that could streamline sales spanning jurisdictions, better operationalize machine learning and artificial intelligence, enforce policy compliance, and enhance decision making on the front lines. The bank chose Palantir Foundry as the operating system for the enterprise, leveraging a single, open and integrated platform across entities and business lines and enabling synergies across the Group.

Yep, AI is part of the deal. Compliance management is part of the agreement. Plus, Palantir will handle cross jurisdictional sales. Also, bank managers will make better decisions. (One hopes the JPMC decision about the fake data, revenues, and accounts will not become an issue for UniCredit.)

Palantir is optimistic about the partnership renewal and five years of billing for what may be quite difficult work to do without errors and within the available time and resource window. A Palantir executive said, according to the article:

Palantir has long been a proud partner to some of the world’s top financial institutions. We’re honored that UniCredit has placed its confidence in Palantir once again and look forward to furthering the bank’s digital transformation.

Will Palantir be able to handle super-sized jobs like the NHS work and the UniCredit project? Personally I will be watching for news about both of these contract wins. For a 20 year old company with its roots in the intelligence community, success in health care and financial services will mark one of the few times intelware has made the leap to mainstream commercial problem solving.

The question is, “Why have the other companies failed in financial services modernization?” I have a lecture about that. Curious to know more? Write benkent2020 at yahoo dot com, and one of my team will respond to you.

Stephen E Arnold, December 18, 2023
