Indifference or Carelessness: The Security Wrecks from Georgia Tech
September 4, 2024
DOJ Sues Georgia Tech for DOD-Related Cybersecurity Violations
The Justice Department takes cybersecurity standards for our military very seriously. Just ask the Georgia Institute of Technology. Nextgov/FCW reports, “DOJ Suit Claims Georgia Tech ‘Knowingly Failed’ to Meet Cyber Standards for DOD Contracts.” The case began in 2022, when two members of the university’s cybersecurity compliance team filed a whistleblower lawsuit under the DOJ’s Civil Cyber-Fraud Initiative. Now the DOJ has joined the fray. Reporter Edward Graham tells us:
“In a press release, DOJ alleged that the institutions committed numerous violations of the Department of Defense’s cybersecurity policy in the years prior to the whistleblower complaint. Among the most serious allegations was the claim that ‘Georgia Tech and [Georgia Tech Research Corporation] submitted a false cybersecurity assessment score to DOD for the Georgia Tech campus’ in December 2020. … The lawsuit also asserted that the Astrolavos Lab at Georgia Tech previously ‘failed to develop and implement a system security plan, which is required by DOD cybersecurity regulations.’ Once the security document was finally implemented in February 2020, the complaint said the university ‘failed to properly scope that plan to include all covered laptops, desktops and servers.’ Additionally, DOJ alleged that the Astrolavos Lab did not use any antivirus or antimalware programs on its devices until December 2021. The university reportedly allowed the lab to refuse the installation of the software ‘in violation of both federal cybersecurity requirements and Georgia Tech’s own policies’ at the request of its director.”
Georgia Tech disputes the charges. It claims there was no data breach or data leak, the information involved was not confidential anyway, and the government had stated this research did not require cybersecurity restrictions. Really? Then why the (allegedly) falsified cybersecurity score? The suit claims the glowing self-reported score for the Georgia Tech campus:
“… was for a ‘fictitious’ or ‘virtual’ environment and did not apply to any covered contracting system at Georgia Tech that could or would ever process, store or transmit covered defense information.”
That one will be hard to explain away. Other entities with DOD contracts will want to pay attention: Graham states the DOJ is cracking down on contractors that lie about their cyber protections.
Cynthia Murrell, September 4, 2024
Google Synthetic Content Scaffolding
September 3, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Google posted what I think is an important technical paper on the arXiv service. The write up is “Towards Realistic Synthetic User-Generated Content: A Scaffolding Approach to Generating Online Discussions.” The paper has six authors and presumably earned a grade of “A”, a mark not awarded to the stochastic parrot write up about Google-type smart software.
For several years, Google has been exploring ways to make software that can produce content suitable for different use cases. One of these has been an effort to use transformer and other technology to produce synthetic data. The idea is that a set of real data is mimicked by AI so that “real” data does not have to be acquired, intercepted, captured, or scraped from systems in the real-time, highly litigious real world. I am not going to slog through the history of smart software and the research and application of synthetic data. If you are curious, check out Snorkel and the work of the Stanford Artificial Intelligence Lab (SAIL).
The paper I referenced above illustrates that Google is “close” to having a system which can generate allegedly realistic and good enough outputs to simulate the interaction of actual human beings in an online discussion group. I urge you to read the paper, not just the abstract.
Consider this diagram (which I know is impossible to read in this blog format so you will need the PDF of the cited write up):
The important point is that the process for creating synthetic “human” online discussions requires a series of steps. Notice that the final step is “fine tuned.” Why is this important? Most smart software is “tuned” or “calibrated” so that the signals generated by the synthetic content set are made to be “close enough” to those of a non-synthetic (real) content set. In simpler terms, smart software is steered or shaped to match signals. When the match is “good enough,” the smart software is good enough to be deployed for a test, a research project, or some use case.
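To make the “match signals” idea concrete, here is a minimal sketch in Python. It is my own toy illustration, not code from the Google paper: it compares one crude signal (comment-length statistics) between a real and a synthetic discussion set and reports whether the synthetic set is “close enough.” The corpora, the signal, and the tolerance are all invented for the example.

```python
import statistics

def length_signal(threads):
    # Crude signal: mean and standard deviation of comment lengths (in words).
    lengths = [len(comment.split()) for thread in threads for comment in thread]
    return statistics.mean(lengths), statistics.stdev(lengths)

def close_enough(real_threads, synthetic_threads, tolerance=0.15):
    # "Good enough" when the synthetic signal is within `tolerance`
    # (relative difference) of the real signal on both statistics.
    real_mean, real_sd = length_signal(real_threads)
    syn_mean, syn_sd = length_signal(synthetic_threads)
    return (abs(real_mean - syn_mean) / real_mean <= tolerance
            and abs(real_sd - syn_sd) / real_sd <= tolerance)

# Toy usage: threads are lists of comments.
real = [["I disagree with the premise entirely", "Fair point, but consider this"],
        ["Has anyone read the original paper?"]]
synthetic = [["The premise seems shaky to me", "Maybe, though there is a counterexample"],
             ["Did anyone check the primary source?"]]
print(close_enough(real, synthetic))
```

A real pipeline would use richer signals (topic distributions, reply-tree shapes, toxicity scores), but the calibration loop is the same: generate, measure, steer, repeat.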
Most of these AI write ups describe steering, directing, massaging, or weaponizing (yes, weaponizing) outputs to achieve an objective. Many jobs will be replaced or supplemented with AI. But the jobs for specialists who can curve fit smart software components to produce “good enough” content to achieve a goal or objective will remain in demand for the foreseeable future.
The paper states in its conclusion:
While these results are promising, this work represents an initial attempt at synthetic discussion thread generation, and there remain numerous avenues for future research. This includes potentially identifying other ways to explicitly encode thread structure, which proved particularly valuable in our results, on top of determining optimal approaches for designing prompts and both the number and type of examples used.
The write up is a preliminary report. It takes months to get data and approvals for this type of public document. How far has Google come between the idea to write up results and this document becoming available on August 15, 2024? My hunch is that Google has come a long way.
What’s the use case for this project? I will let younger, more optimistic minds answer this question. I am a dinobaby, and I have been around long enough to know a potent tool when I encounter one.
Stephen E Arnold, September 3, 2024
Another Big Consulting Firm Does Smart Software… Sort Of
September 3, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Will programmers and developers become targets for prosecution when flaws cripple vital computer systems? That may be a good idea because pointing to the “algorithm” as the cause of a problem does not seem to reduce the number of bugs, glitches, and unintended consequences of software. A write up which itself may be a blend of human and smart software suggests change is afoot.
Thanks, MSFT Copilot. Good enough.
“Judge Rules $400 Million Algorithmic System Illegally Denied Thousands of People’s Medicaid Benefits” reports that software crafted by the services firm Deloitte did not work as the State of Tennessee assumed. Yep, assume. A very interesting word.
The article explains:
The TennCare Connect system—built by Deloitte and other contractors for more than $400 million—is supposed to analyze income and health information to automatically determine eligibility for benefits program applicants. But in practice, the system often doesn’t load the appropriate data, assigns beneficiaries to the wrong households, and makes incorrect eligibility determinations, according to the decision from Middle District of Tennessee Judge Waverly Crenshaw Jr.
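To see how small data-handling faults cascade into wrong denials, consider a toy eligibility check. This is my own illustration with invented thresholds, not TennCare’s actual rules: when the system fails to load a field or assigns an applicant to the wrong household, the determination flips or a record that should go to human review gets decided anyway.

```python
def eligible(applicant):
    # Toy rule: monthly income threshold scaled by household size.
    # Returns True/False, or None when required data is missing
    # (missing data should trigger human review, not an automatic denial).
    income = applicant.get("monthly_income")
    household = applicant.get("household_size")
    if income is None or household is None:
        return None
    return income <= 1500 + 500 * (household - 1)

print(eligible({"monthly_income": 2100, "household_size": 3}))  # True
print(eligible({"monthly_income": 2100, "household_size": 1}))  # False: wrong household, wrong answer
print(eligible({"monthly_income": 2100}))                       # None: incomplete record
```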
At one time, Deloitte was an accounting firm. Then it became a consulting outfit a bit like McKinsey. Well, a lot like that firm and other blue-chip consulting outfits. In its current manifestation, Deloitte is into technology, programming, and smart software. Well, maybe the software is smart but the programmers and the quality control seem to be riding in a different school bus from some other firms’ technical professionals.
The write up points out:
Deloitte was a major beneficiary of the nationwide modernization effort, winning contracts to build automated eligibility systems in more than 20 states, including Tennessee and Texas. Advocacy groups have asked the Federal Trade Commission to investigate Deloitte’s practices in Texas, where they say thousands of residents are similarly being inappropriately denied life-saving benefits by the company’s faulty systems.
In 2016, Cathy O’Neil published Weapons of Math Destruction. Her book had a number of interesting examples of what goes wrong when careless people make assumptions about numerical recipes. If she does another book, she may include this Deloitte case.
Several observations:
- The management methods used to create these smart systems require scrutiny. The downstream consequences are harmful.
- Developers and programmers can be fired, but remediation processes for when something unexpected surfaces must be built into the work process.
- Less informed users and more smart software strikes me as a combustible mixture. When a system ignites, the impacts may reverberate in other smart systems. What entity is going to fix the problem and accept responsibility? The answer is, “No one” unless there are significant consequences.
The State of Tennessee’s experience makes clear that a “brand name”, slick talk, an air of confidence, and possibly ill-informed managers can do harm. The opioid misstep was bad. Now imagine that type of thinking in the form of a fast, indifferent, and flawed “system.” Firing a 25-year-old is not the solution.
Stephen E Arnold, September 3, 2024
Consensus: A Gen AI Search Fed on Research, not the Wild Wild Web
September 3, 2024
How does one make an AI search tool that is actually reliable? Maybe start by supplying it with only peer-reviewed papers instead of the whole Internet. Fast Company sings the praises of Consensus in “Google Who? This New Service Actually Gets AI Search Right.” Writer JR Raphael begins by describing why most AI-powered search engines, including Google’s, are terrible:
“The problem with most generative AI search services, at the simplest possible level, is that they have no idea what they’re even telling you. By their very nature, the systems that power services like ChatGPT and Gemini simply look at patterns in language without understanding the actual context. And since they include all sorts of random internet rubbish within their source materials, you never know if or how much you can actually trust the info they give you.”
Yep, that pretty much sums it up. So, like us, Raphael was skeptical when he learned of yet another attempt to bring generative AI to search. Once he tried the easy-to-use Consensus, however, he was convinced. He writes:
“In the blink of an eye, Consensus will consult over 200 million scientific research papers and then serve up an ocean of answers for you—with clear context, citations, and even a simple ‘consensus meter’ to show you how much the results vary (because here in the real world, not everything has a simple black-and-white answer!). You can dig deeper into any individual result, too, with helpful features like summarized overviews as well as on-the-fly analyses of each cited study’s quality. Some questions will inevitably result in answers that are more complex than others, but the service does a decent job of trying to simplify as much as possible and put its info into plain English. Consensus provides helpful context on the reliability of every report it mentions.”
See the post for more on using the web-based app, including a few screenshots. Raphael notes that, if one does not have a specific question in mind, the site has long lists of its top answers for curious users to explore. The basic service is free with no query cap, but its creators hope to entice us with an $8.99/month premium plan. Of course, this service is not going to help with every type of search. But if the subject is worthy of academic research, Consensus should have the (correct) answers.
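For the curious, the “consensus meter” is easy to picture. Here is a minimal sketch, my own guess at the mechanics rather than Consensus’s actual method: classify each cited paper’s finding as yes, no, or mixed, then report the share of papers in each camp.

```python
from collections import Counter

def consensus_meter(paper_stances):
    # Given per-paper stance labels ("yes", "no", "mixed"),
    # return the percentage of papers in each camp.
    counts = Counter(paper_stances)
    total = sum(counts.values())
    return {stance: round(100 * n / total, 1) for stance, n in counts.items()}

# Toy usage: stance labels extracted from ten hypothetical papers.
stances = ["yes"] * 6 + ["mixed"] * 3 + ["no"]
print(consensus_meter(stances))  # {'yes': 60.0, 'mixed': 30.0, 'no': 10.0}
```

The hard part, of course, is the classification step that feeds this tally; that is where the 200 million papers and the language models earn their keep.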
Cynthia Murrell, September 3, 2024
Elastic N.V. Faces a New Search Challenge
September 2, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Elastic N.V. and Shay Banon are what I call search survivors. Gone are Autonomy (mostly), Delphis, Exalead, Fast Search & Transfer (mostly), Vivisimo, and dozens upon dozens of companies that sought to put an organization’s information at an employee’s fingertips. The marketing lingo of these and other now-defunct enterprise search vendors is surprisingly timely. One can copy and paste chunks of Autonomy’s white papers into the OpenAI “ChatGPT search is coming” articles, and few would notice that the assertions and even the word choice were decades old.
Elastic N.V. survived. It rose from a failed search system called Compass. Elastic N.V. recycled the Lucene libraries, released the open source Elasticsearch, and did an IPO. Some people made a lot of money. The question is, “Will that continue?”
I noted the Silicon Angle article “Elastic Shares Plunge 25% on Lower Revenue Projections Amid Slower Customer Commitments.” That write up says:
In its earnings release, Chief Executive Officer Ash Kulkarni started positively, noting that the results in the quarter were solid and outperformed previous guidance, but then comes the catch and the reason why Elastic stock is down so heavily after hours. “We had a slower start to the year with the volume of customer commitments impacted by segmentation changes that we made at the beginning of the year, which are taking longer than expected to settle,” Kulkarni wrote. “We have been taking steps to address this, but it will impact our revenue this year.” With that warning, Elastic said that it expects fiscal second-quarter adjusted earnings per share of 37 to 39 cents on revenue of $353 million to $355 million. The earnings per share forecast was ahead of the 34 cents expected by analysts, but revenue fell short of an expected $360.8 million. It was a similar story for Elastic’s full-year outlook, with the company forecasting earnings per share of $1.52 to $1.56 on revenue of $1.436 billion to $1.444 billion. The earnings per share outlook was ahead of an expected $1.42, but like the second quarter outlook, revenue fell short, as analysts had expected $1.478 billion.
Elastic N.V. makes money via services and for-fee extras. I want to point out that quarterly revenue in the $300 million range is good. Elastic N.V. has figured out a business model that has not required [a] fiddling the books, [b] finding a buyer as customers complain about problems with the search software, [c] financing sources raging about cash burn and lousy revenue, [d] government investigators poking around for tax and other financial irregularities, [e] running costs beyond the reach of the licensee, or [f] a system that simply does not search or retrieve what the user wanted or expected.
Elastic N.V. and its management team may have a challenge to overcome. Thanks, OpenAI; the MSFT Copilot thing crashed today.
So what’s the fix?
A partial answer appears in the Elastic N.V. blog post titled “Elasticsearch Is Open Source, Again.” The company states:
The tl;dr is that we will be adding AGPL as another license option next to ELv2 and SSPL in the coming weeks. We never stopped believing and behaving like an open source community after we changed the license. But being able to use the term Open Source, by using AGPL, an OSI approved license, removes any questions, or fud, people might have.
Without slogging through the confusion among what Elastic N.V. sells, the open source version of Elasticsearch, the dust up with Amazon over its really original approach to search inspired by Elasticsearch, Lucid Imagination’s innovation, and the creaking edifice of A9, the bottom line is this: Elastic N.V. has released Elasticsearch under an additional open source license. I think that means one can use the software and not pay Elastic N.V. until additional services are needed. In my experience, most enterprise search systems, regardless of how they are explained, need the “owner” of the system to lend a hand. Contrary to the belief that smart software can do enterprise search right now, there are some hurdles to get over.
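For readers who have never touched that no-license-fee baseline, here is a minimal sketch using the official Python client (the 8.x elasticsearch package), assuming a single local node listening on the default port: index a document, then run a full-text match query.

```python
from elasticsearch import Elasticsearch

# Assumes a local single-node Elasticsearch on the default port.
es = Elasticsearch("http://localhost:9200")

# Index a document; the index is created on first write.
es.index(index="memos", id="1", document={
    "title": "Enterprise search budget",
    "body": "Findability remains an unsolved problem for most licensees.",
})
es.indices.refresh(index="memos")  # make the document searchable immediately

# Basic full-text query against the body field.
response = es.search(index="memos", query={"match": {"body": "findability"}})
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```

Getting this far is free; relevance tuning, security, scaling, and the other “additional services” are where the invoices start.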
Will “going open source again” work?
Let me offer several observations based on my experience with enterprise search and retrieval which reaches back to the days of punch cards and systems which used wooden rods to “pull” cards with a wanted tag (index term):
- When an enterprise search system loses revenue momentum, the fix is to acquire companies in an adjacent search space and use that revenue to bolster the sales prospects for upsells.
- The company with the downturn gilds the lily and seeks a buyer. One example was the sale of Exalead to Dassault Systèmes, which calculated it was more economical to buy a vendor than to keep paying its then current supplier, which I think was Autonomy, but I am not sure. Fast Search & Transfer pulled off this type of “exit” as some of the company’s activities were under scrutiny.
- The search vendor can pivot from doing “search” and morph into a business intelligence system. (By the way, that did not work for Grok.)
- The company disappears. One example is Entopia. Poof. Gone.
I hope Elastic N.V. thrives. I hope the “new” open source play works. Search — whether enterprise or Web variety — is far from a solved problem. People believe they have the answer. Others believe them and license the “new” solution. The reality is that finding information is a difficult challenge. Let’s hope the “downturn” and “negativism” go away.
Stephen E Arnold, September 2, 2024
Social Media Cowboys, the Ranges Are Getting Fences
September 2, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Several recent developments suggest that the wide open and free ranges are being fenced in. How can I justify this statement, pardner? Easy. Check out these recent developments:
- The founder of Telegram, Pavel Durov, was arrested on Saturday, August 24, 2024, at Le Bourget airport near Paris.
- TikTok will stand trial for the harms to children caused by the “algorithm.”
- Brazil has put up barbed wire to keep Twitter (now X.com) out of the country.
I am not the smartest dinobaby in the rest home, but even I can figure out that governments are taking action after decades of thinking about more weighty matters than the safety of children, the problems social media causes for parents and teachers, and the importance of taking immediate and direct action against those breaking laws.
A couple of social media ranchers are wondering about the actions of some judicial officials. Thanks, MSFT Copilot. Good enough like most software today.
Several questions seem to be warranted.
First, the actions are uncoordinated. Brazil, France, and the US have reached conclusions about different social media companies and acted without consulting one another. How quickly will other countries consider their particular situations and reach similar conclusions about free range technology outfits?
Second, why have legal authorities and legislators in many countries failed to recognize the issues radiating from social media and related technology operators? Was it the novelty of technology? Was it a lack of technology savvy? Was it moral or financial considerations?
Third, how will the harms be remediated? Is it enough to block a service or change penalties for certain companies?
I am personally not moved by those who say speech must be free and unfettered. Sorry. The obvious harms outweigh that self-serving statement from those who are mesmerized by online life or paid to have that idea and promote it. I understand that a percentage of students will become high achievers with or without traditional reading, writing, and arithmetic. However, my concern is the other 95 percent of students. Structured learning is necessary for a society to function. That’s why there is education.
I don’t have any big ideas about ameliorating the obvious damage done by social media. I am a dinobaby and largely untouched by TikTok-type videos or Facebook-type pressures. I am, however, delighted to be able to cite three examples of long overdue action by Brazilian, French, and US officials. Will some of these wild west digital cowboys end up in jail? I might support that, pardner.
Stephen E Arnold, September 2, 2024
Google Claims It Fixed Gemini’s “Degenerate” People
September 2, 2024
History revision is a problem. It’s been a problem for…well…since the start of recorded history. The Internet and mass media are infamous for getting historical facts wrong, but image-generating AI, like Google’s Gemini, is even worse. TechCrunch explains what Google did to correct its inaccurate algorithm: “Google Says It’s Fixed Gemini’s People-Generating Feature.”
Google released its chatbot (first as Bard, later rebranded Gemini) in early 2023, then about a year later paused Gemini’s people-image generation for being too “woke,” “politically incorrect,” and “historically inaccurate.” The most notorious example: asked to depict a Roman legion, Gemini rendered it as ethnically diverse, which fit the DEI agenda; asked to depict a Zulu warrior army, it returned only brown-skinned people. The latter is historically accurate. The former is not, unless one believes Europe (where light skinned pink people originate) fielded ethnically diverse legions centuries ago; Google apparently did not want to offend western ethnic minorities.
Everything was A-OK until someone invoked Godwin’s Law by asking Gemini to generate (degenerate [sic]) an image of Nazis. Gemini returned an ethnically diverse picture with all types of Nazis, not the historically accurate light-skinned Germans native to Europe.
Google claims it fixed Gemini, though the repair took way longer than planned. The people-generating feature is only available on paid Gemini plans. How does Google plan to make its AI people less degenerative? Here’s how:
“According to the company, Imagen 3, the latest image-generating model built into Gemini, contains mitigations to make the people images Gemini produces more ‘fair.’ For example, Imagen 3 was trained on AI-generated captions designed to ‘improve the variety and diversity of concepts associated with images in [its] training data,’ according to a technical paper shared with TechCrunch. And the model’s training data was filtered for ‘safety,’ plus ‘review[ed] … with consideration to fairness issues,’ claims Google. … ‘We’ve significantly reduced the potential for undesirable responses through extensive internal and external red-teaming testing, collaborating with independent experts to ensure ongoing improvement,’ the spokesperson continued. ‘Our focus has been on rigorously testing people generation before turning it back on.’”
Google will eventually make it work and the company is smart to limit Gemini’s usage to paid subscriptions. Limiting the user pool means Google can better control the chatbot and (if need be) turn it off. It will work until bad actors learn how to abuse the chatbot again for their own sheets and giggles.
Whitney Grace, September 2, 2024
The Seattle Syndrome: Definitely Debilitating
August 30, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I think the film “Sleepless in Seattle” included dialog like this:
“What do they call it when everything intersects?”
“The Bermuda Triangle.”
Seattle has Boeing. The company is in the news not just for doors falling off its aircraft. The outfit has stranded two people in earth orbit and has to let Elon Musk bring them back to earth. And Seattle has Amazon, an outfit that stands behind the products it sells. And I have to include Intel Labs, not too far from the University of Washington, which is famous in its own right for many things.
Two job seekers discuss future opportunities in some of Seattle and environs’ best-known enterprises. The image of the city seems a bit dark. Thanks, MSFT Copilot. Are you having some dark thoughts about the area, its management talent pool, and its commitment to ethical business activity? That’s a lot of burning cars, but whatever.
Is Seattle a Bermuda Triangle for large companies?
This question invites another; specifically, “Is Microsoft entering Seattle’s Bermuda Triangle?”
The giant outfit has entered a deal with the interesting specialized software and consulting company Palantir Technologies Inc. This firm has a history of ups and downs since its founding 21 years ago. Microsoft has committed to smart software from OpenAI and other outfits. Artificial intelligence will be “in” everything from the Azure Cloud to Windows. Despite concerns about privacy, Microsoft wants each Windows user’s machine to keep screenshots of what the user “does” on that computer.
Microsoft seems to be navigating the Seattle Bermuda Triangle quite nicely. No hints of a flash disaster like the sinking of the sailing yacht Bayesian. Who could have predicted that? (That’s a reminder that fancy math does not deliver 1.000000 outputs on a consistent basis.)
Back to Seattle. I don’t think failure or extreme stress is due to the water. The weather, maybe? I don’t think it is the city government. It is probably not the multi-faceted start up community nor the distinctive vocal tones of its most high profile podcasters.
Why is Seattle emerging as a Bermuda Triangle for certain firms? What forces are intersecting? My observations are:
- Seattle’s business climate is a precursor of broader management issues. I think it is like the birds that Greek augurs examined for clues about the future.
- The individuals who work at Boeing-type outfits go along with business processes modified incrementally to ignore issues. The mental orientation of those employed is either malleable or indifferent to downstream issues. For example, a Windows update killed printing or some other function. The response strikes me as “meh.”
- The management philosophy disconnects from users and focuses on delivering financial results. Those big houses come at a cost. The payoff is personal. The cultural impacts are not on the radar. Hey, those quantum Horse Ridge things make good PR. What about the new desktop processors? Just great.
Net net: I think Seattle is a city playing an important role in defining how businesses operate in 2024 and beyond. I wish I were kidding. But I am bedeviled by reminders of a spacecraft that issues one-way tickets, software glitches, and products which seem to vary from the online images and reviews. (Maybe it is the water? Bermuda Triangle water?)
Stephen E Arnold, August 30, 2024
Pavel Durov: Durable Appeal Despite Crypto, French Allegations, and a Travel Restriction
August 30, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Telegram, a Web3 crypto facilitator, is in the news because its Big Dog is in a French dog house. He can roam free, but he cannot leave the country. I mention Pavel Durov, the brother of Nikolai (who has two PhDs in his brain), because he has appeal. Allegedly he also has more than 100 children. I find Mr. Durov’s fecundity an anomaly if the information in “Men in Crypto Less Attractive to Women Than Cosplayers, Anime Buffs: Survey” is accurate. That story suggests that men in crypto will not be at the front of the line when it comes to fathering five score bambinos.
Thanks, Microsoft Copilot. Nice cosplay. Who is the fellow in the bunny suit?
The write up reports:
Crypto was seen as the ninth-most unattractive hobby for males, the Aug. 24 survey by the Date Psychology blog found, which was a convenience sample of 814 people, 48% of which were female. The authors noted that based on past surveys, their sample population disproportionately includes women of “high social status,” with a high level of education and who are predominately white.
I will not point out that the sample size seems a few cans short of a six pack, nor that an unbiased sample is usually a good idea. But the idea is interesting.
The article continues with what I think are unduly harsh words:
Female respondents were asked if they found a list of 74 hobbies either “attractive” or “unattractive.” Only 23.1% said crypto was an attractive hobby, while around a third found comic books and cosplaying attractive. It left crypto as the second-most unattractive so-called “nerd” hobby to women — behind collecting products from Funko, which makes pop culture and media-based bobblehead figures.
The article includes some interesting data:
The results show that females thought reading was the most attractive hobby for a man (98.2%), followed by knowing or learning a foreign language (95.6%) and playing an instrument (95.4%).
I heard that Pavel Durov, not the brother with the two-PhD brain, has a knack for languages. He allegedly speaks Russian (seems logical; his parents are Russian), French (seems logical; he has French citizenship), “Persian” (seems logical; he has UAE citizenship and lives in quite spartan quarters in Dubai), and, courtesy of his Saint Kitts and Nevis citizenship, presumably English and some Creole. Now that he is in France with only a travel restriction, he can attend some anime and cosplay events. It is possible that Parisian crypto enthusiasts will have a “Crypto Night” at a bistro like Le Procope. In order to have more appeal, he may wear a get-up.
I would suggest that his billionaire status and “babes near me” function in Telegram might enhance his appeal. If he has more than 100 Durov bambinos, why not shoot for 200 or more? He is living proof that surveys are not 100 percent reliable.
Stephen E Arnold, August 30, 2024
What Is a Good Example of AI Enhancing Work Processes? Klarna
August 30, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Klarna is a financial firm in Sweden. (Did you know Sweden has a violence problem?) The company is quite public about the value of smart software to its operations. “‘Our Chatbots Perform The Tasks Of 700 People’: Buy Now, Pay Later Company Klarna To Axe 2,000 Jobs As AI Takes On More Roles” reports:
Klarna has already cut over 1,000 employees and plans to remove nearly 2,000 more
Yep, that’s the use case. Smart software allows the firm’s leadership to terminate people. (Does that managerial attitude contribute to the crime problem in Sweden? Of course not. The company is just being efficient.)
The write up states:
Klarna claims that its AI-powered chatbot can handle the workload previously managed by 700 full-time customer service agents. The company has reduced the average resolution time for customer service inquiries from 11 minutes to two while maintaining consistent customer satisfaction ratings compared to human agents.
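A back-of-envelope check, using only the figures quoted above plus the 700-agent claim, suggests the arithmetic is at least plausible: cutting average resolution time from 11 minutes to two is roughly a 5.5x throughput gain per service seat.

```python
# Back-of-envelope sketch using only the figures quoted above.
minutes_before = 11   # average resolution time with human agents
minutes_after = 2     # average resolution time with the chatbot

throughput_multiplier = minutes_before / minutes_after
print(f"Throughput per seat: {throughput_multiplier:.1f}x")  # 5.5x

# Illustration only: if 700 agents once handled the load, the same
# inquiry volume needs the equivalent of far fewer seats at chatbot speed.
equivalent_seats = 700 / throughput_multiplier
print(f"Equivalent seats: {equivalent_seats:.0f}")  # ~127
```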
What’s the financial payoff for this leader in AI deployment? The write up says:
Klarna reported a 73 percent increase in average revenue per employee compared to last year.
Klarna, however, is humane. According to the article:
Notably, none of the workforce reductions have been achieved through layoffs. Instead, the company has relied on a combination of natural staff turnover and a hiring freeze implemented last year.
That’s a relief. Some companies would deploy Microsoft software with AI and start getting rid of people. The financial benefits are significant. Plus, as long as the company chugs along in good enough mode, the smart software delivers a win for the firm.
Are there any downsides? None in the write up. There is a financial payoff on the horizon. The article states:
In July [2024], Chrysalis Investments, a major Klarna investor, provided a more recent valuation estimate, suggesting that the fintech firm could achieve a valuation between 15 billion and 20 billion dollars in an initial public offering.
But what if the AI acts like a brake on the firm’s revenue growth and sales? Hey, this is an AI success. Why be negative? AI is wonderful, and Klarna’s customers appear to be thrilled with smart software. I personally love speaking to smart chatbots, don’t you?
Stephen E Arnold, August 30, 2024