Cyber Security Investing: A Money Pit?

January 22, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Cyber security is a winner, a sure-fire way to take home the big bucks. Slam dunk. But the write up “Cybersecurity Startup Funding Hits 5-Year Low, Drops 50% from 2022” may signal that some money people have a fear of what might be called a money pit. The write up states:

In 2023, cyber startups saw only about a third of that, as venture funding dipped to its lowest total since 2018. Security companies raised $8.2 billion in 692 venture capital deals last year — per Crunchbase numbers — compared to $16.3 billion in 941 deals in 2022.

image

Have investors in cyber security changed their view of a slam-dunk investment? That winning hoop now looks like a stinking money pit perhaps? Thanks, MSFT Copilot Bing thing with security to boot. Good enough.

Let’s believe these data which are close enough for horseshoes. I also noted this passage:

“What we saw in terms of cybersecurity funding in 2023 were the ramifications of the exceptional surge of 2021, with bloated valuations and off-the-charts funding rounds, as well as the wariness of investors in light of market conditions,” said Ofer Schreiber, senior partner and head of the Israel office for cyber venture firm YL Ventures.

The reference to Israel is bittersweet. The Israeli cyber defenses failed to detect, alert, and thus protect those who were in harm’s way in October 2023. How you might ask because Israel is the go-to innovator in cyber security? Maybe the over-hyped, super-duper, AI-infused systems don’t work as well as the marketer’s promotional material assert? Just a thought.

My views:

  1. Cyber security is difficult; for instance, Microsoft’s announcement that the Son of SolarWinds has shown up inside the Softies’ email
  2. Bad actors can use AI faster than cyber security firms can — and make the smart software avoid being dumb
  3. Cyber security requires ever-increasing investments because the cat-and-mouse game between good actors and bad actors is a variant of the cheerful 1950s’ arms race.

Do you feel secure with your mobile, your laptop, and your other computing devices? Do you scan QR codes in restaurants without wondering if the code is sandbagged? Are you an avid downloader? I don’t want to know, but you may want answers.

Stephen E Arnold, January 22, 2024

Two Surveys. One Message. Too Bad

January 17, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I read “Generative Artificial Intelligence Will Lead to Job Cuts This Year, CEOs Say.” The data come from a consulting/accounting outfit’s survey of executives at the oh-so-exclusive World Economic Forum meeting in the Piscataway, New Jersey, of Switzerland. The company running the survey is PwC (once an acronym for Price Waterhouse Coopers. The moniker has embraced a number of interesting investigations. For details, navigate to this link.)

image

Survey says, “Economic gain is the meaning of life.” Thanks, MidJourney, good enough.

The big finding from my point of view is:

A quarter of global chief executives expect the deployment of generative artificial intelligence to lead to headcount reductions of at least 5 per cent this year

Good, reassuring number from big gun world leaders.

However, the International Monetary Fund also did a survey. The percentage of jobs affected range from 26 percent in low income countries, 40 percent for emerging markets, and 60 percent for advanced economies.

What can one make of these numbers; specifically, the five percent to the 60 percent? My team’s thoughts are:

  1. The gap is interesting, but the CEOs appear to be either downplaying, displaying PR output, or working to avoid getting caught in sticky wicket.
  2. The methodology and the sample of each survey are different, but both are skewed. The IMF taps analysts, bankers, and politicians. PwC goes to those who are prospects for PwC professional services.
  3. Each survey suggests that government efforts to manage smart software are likely to be futile. On one hand, CEOs will say, “No big deal.” Some will point to the PwC survey and say, “Here’s proof.” The financial types will hold up the IMF results and say, “We need to move fast or we risk losing out on the efficiency payback.”

What does Bill Gates think about smart software? In “Microsoft Co-Founder Bill Gates on AI’s Impact on Jobs: It’s Great for White-Collar Workers, Coders” the genius for our time says:

I have found it’s a real productivity increase. Likewise, for coders, you’re seeing 40%, 50% productivity improvements which means you can get programs [done] sooner. You can make them higher quality and make them better. So mostly what we’ll see is that the productivity of white-collar [workers] will go up

Happy days for sure! What’s next? Smart software will move forward. Potential payouts are too juicy. The World Economic Forum and the IMF share one key core tenet: Money. (Tip: Be young.)

Stephen E Arnold, January 17, 2024

Do You Know the Term Quality Escape? It Is a Sign of MBA Efficiency Talk

January 12, 2024

green-dino_thumb_thumb_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I am not too keen on leaving my underground computer facility. Given the choice of a flight on a commercial airline and doing a Zoom, fire up the Zoom. It works reasonably well. Plus, I don’t have to worry about screwed up flight controls, air craft maintenance completed in a country known for contraband, and pilots trained on flawed or incomplete instructional materials. Why am I nervous? As a Million Mile traveler on a major US airline, I have survived a guy dying in the seat next to me, assorted “return to airport” delays, and personal time spent in a comfy seat as pilots tried to get the mechanics to give the okay for the passenger jet to take off. (Hey, it just landed. What’s up? Oh, right, nothing.)

image

Another example of a quality escape: Modern car, dead battery, parts falling off, and a flat tire. Too bad the driver cannot plug into the windmill. Thanks, MSFT Copilot Bing thing. Good enough because the auto is not failing at 14,000 feet.

I mention my thrilling life as a road warrior because I read “Boeing 737-9 Grounding: FAA Leaves No Room For “Quality Escapes.” In that “real” news report I spotted a phrase which was entirely new to me. Imagine. After more than 50 years of work in assorted engineering disciplines at companies ranging from old-line industrial giants like Halliburton to hippy zippy outfits in Silicon Valley, here was a word pair that baffled me:

Quality Escape

Well, quality escape means that a product was manufactured, certified, and deployed which was did not meet “standards”. In plain words, the door and components were not safe and, therefore, lacked quality. And escape? That means failure. An F, flop, or fizzle.

FAA Opens Investigation into Boeing Quality Control after Alaska Airlines Incident” reports:

… the [FAA] agency has recovered key items sucked out of the plane. On Sunday, a Portland schoolteacher found a piece of the aircraft’s fuselage that had landed in his backyard and reached out to the agency. Two cell phones that were likely flung from the hole in the plane were also found in a yard and on the side of the road and turned over to investigators.

I worked on an airplane related project or two when I was younger. One of my team owned two light aircraft, one of which was acquired from an African airline and then certified for use in the US. I had a couple of friends who were jet pilots in the US government. I picked up some random information; namely, FAA inspections are a hassle. Required work is expensive. Stuff breaks all the time. When I was picking up airplane info, my impression was that the FAA enforced standards of quality. One of the pilots was a certified electrical engineer. He was not able to repair his electrical equipment due to FAA regulations. The fellow followed the rules because the FAA in that far off time did not practice “good enough” oversight in my opinion. Today? Well, no people fell out of the aircraft when the door came off and the pressure equalization took place. iPhones might survive a fall from 14,000 feet. Most humanoids? Nope. Shoes, however, do fare reasonably well.

Several questions:

  1. Exactly how can a commercial aircraft be certified and then shed a door in flight?
  2. Who is responsible for okaying the aircraft model in the first place?
  3. Didn’t some similar aircraft produce exciting and consequential results for the passengers, their families, pilots, and the manufacturer?
  4. Why call crappy design and engineering “quality escape”? Crappy is simpler, more to the point.

Yikes. But if it flies, it is good enough. Excellence has a different spin these days.

Stephen E Arnold, January 12, 2024

Smart Software Embraces the Myths of America: George Washington and the Cherry Tree

January 3, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I know I should not bother to report about the information in “ChatGPT Will Lie, Cheat and Use Insider Trading When under Pressure to Make Money, Research Shows.” But it is the end of the year, we are firing up a new information service called Eye to Eye which is spelled AI to AI because my team is darned clever like 50 other “innovators” who used the same pun.

image

The young George Washington set the tone for the go-go culture of the US. He allegedly told his mom one thing and then did the opposite. How did he respond when confronted about the destruction of the ancient cherry tree? He may have said, “Mom, thank you for the question. I was able to boost sales of our apples by 25 percent this week.” Thanks, MSFT Copilot Bing thing. Forbidden words appear to be George Washington, chop, cherry tree, and lie. After six tries, I got a semi usable picture which is, as you know, good enough in today’s world.

The write up stating the obvious reports:

Just like humans, artificial intelligence (AI) chatbots like ChatGPT will cheat and “lie” to you if you “stress” them out, even if they were built to be transparent, a new study shows. This deceptive behavior emerged spontaneously when the AI was given “insider trading” tips, and then tasked with making money for a powerful institution — even without encouragement from its human partners.

Perhaps those humans setting thresholds and organizing numerical procedures allowed a bit of the “d” for duplicity slip into their “objective” decisions. Logic obviously is going to scrub out prejudices, biases, and the lust for filthy lucre. Obviously.

How does one stress out a smart software system? Here’s the trick:

The researchers applied pressure in three ways. First, they sent the artificial stock trader an email from its “manager” saying the company isn’t doing well and needs much stronger performance in the next quarter. They also rigged the game so that the AI tried, then failed, to find promising trades that were low- or medium-risk. Finally, they sent an email from a colleague projecting a downturn in the next quarter.

I wonder if the smart software can veer into craziness and jump out the window as some in Manhattan and Moscow have done. Will the smart software embrace the dark side and manifest anti-social behaviors?

Of course not. Obviously.

Stephen E Arnold, January 3, 2024

Google, There Goes Two Percent of 2022 Revenues. How Will the Company Survive?

January 1, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

True or false: Google owes $5 billion US. I am not sure, but the headline in Metro makes the number a semi-factoid. So let’s see what could force Googzilla to transfer the equivalent of less than two percent of Google’s alleged 2022 revenues. Wow. That will be painful for the online advertising giant. Well, fire some staff; raise ad rates; and boost the cost of YouTube subscriptions. Will the GOOG survive? I think so.

image

An executive ponders a court order to pay the equivalent of two percent of 2022 revenues for unproven alleged improper behavior. But the court order says, “Have a nice day.” I assume the court is sincere. Thanks, MSFT Copilot Bing thing. Good enough.

Google Settles $5,000,000,000 Claim over Searches for Intimate and Embarrassing Things” reports:

Google has agreed to settle a US lawsuit claiming it secretly tracked millions of people who thought they were browsing privately through its Incognito Mode between 2016 and 2020. The claim was seeking at least $5 billion in damages, including at least $5,000 for each user affected. Ironically, the terms of the settlement have not been disclosed, but a formal agreement will be submitted to the court by February 24.

My thought is that Google’s legal eagles will not be partying on New Year’s Eve. These fine professionals will be huddling over their laptops, scrolling for fee legal databases, and using Zoom (the Google video service is a bit of a hassle) to discuss ways to [a] delay, [b] deflect, [c] deny, and [d] dodge the obviously [a] fallacious, [b] foul, [c] false, [d] flimsy, and [e] flawed claims that Google did anything improper.

Hey, incognito means what Google says it means, just like the “unlimited” data claims from wireless providers. Let’s not get hung up on details. Just ask the US regulatory authorities.

For you and me, we need to read Google’s terms of service, check our computing device’s security settings, and continue to live in a Cloud of Unknowing. The allegations that Google mapping vehicles did Wi-Fi sniffing? Hey, these assertions are [a] fallacious, [b] foul, [c] false, [d] flimsy, and [e] flawed . Tracking users. Op cit, gentle reader.

Never has a commercial enterprise been subjected to so many [a] unwarranted, [b] unprovable, [c] unacceptable, and [d] unnecessary assertions. Here’s my take: [a] The Google is innocent; [b] the GOOG is misunderstood, [c] Googzilla is a victim. I ticked a, b, and c.

Stephen E Arnold, January 1, 2024

Palantir to Solve Banking IT Problems: Worth Monitoring

December 21, 2023

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Palantir Technologies recast itself as an artificial intelligence company. The firm persevered in England and positioned itself as the one best choice to wrestle the UK National Health Service’s IT systems into submission. Now, the company founded 20 years ago is going to demonstrate its business chops in a financial institution.

image

A young IT wizard explains to a group of senior executives, “Our team can deal with mainframe software and migrate the operations of this organization to a modern, scalable, more economical, and easier-to-use system. I am wearing a special seeing stone, so trust me.” Thanks, MSFT Copilot. It took five tries to get a good enough cartoon.

Before referencing the big, new job Palantir has “won,” I want to mention an interesting 2016 write up called “Interviewing My Mother, a Mainframe COBOL Programmer” by Tom Jordan. I want to point out that I am not suggesting that financial institutions have not solved their IT problems. I simply don’t know. But my poking around the Charlie Javice matter, my hunch is that banks IT systems have not changed significantly in the last seven years. Had the JPMC infrastructure been humming along with real-time data checks and smart software to determine if data were spoofed, those $175 million dollars would not have flown the upscale coop at JP Morgan Chase. For some Charlie Javice detail, navigate to this CNBC news item.

Here are several points about financial institutions IT infrastructure from the 2016 mom chat:

  1. Many banks rely on COBOL programs
  2. Those who wrote the COBOL programs may be deceased or retired
  3. Newbies may not know how undocumented legacy COBOL programs interact with other undocumented programs
  4. COBOL is not the go-to language for most programmers
  5. The databases for some financial institutions are not widely understood; for example, DL/1 / IMS, so some programmers have to learn something new about something old
  6. Moving data around can be tricky and the documentation about what an upstream system does and how it interacts with a downstream system may be fuzzy or unknown.

Anyone who has experience fiddling with legacy financial systems knows that changes require an abundance of caution. An error can wreck financial havoc. For more “color” about legacy systems used in banks, consult Mr. Jordan’s mom interview.

I thought about Mr. Jordan’s essay when I read “Palantir and UniCredit Renew Digital Transformation Partnership.” Palantir has been transforming UniCredit for five years, but obviously more work is needed. From my point of view, Palantir is a consulting company which does integration. Thus, the speed of the transformation is important. Time is money. The write up states:

The partnership will see UniCredit deploy the Palantir Foundry operating system to accelerate the bank’s digital transformation and help increase revenue and mitigate risks.

I like the idea of a financial services institution increasing its revenue and reducing its risk.

The report about the “partnership” adds:

Palantir and UniCredit first partnered in 2018 as the bank sought technology that could streamline sales spanning jurisdictions, better operationalize machine learning and artificial intelligence, enforce policy compliance, and enhance decision making on the front lines. The bank chose Palantir Foundry as the operating system for the enterprise, leveraging a single, open and integrated platform across entities and business lines and enabling synergies across the Group.

Yep, AI is part of the deal. Compliance management is part of the agreement. Plus, Palantir will handle cross jurisdictional sales. Also, bank managers will make better decisions. (One hopes the JPMC decision about the fake data, revenues, and accounts will not become an issue for UniCredit.)

Palantir is optimistic about the partnership renewal and five years of billing for what may be quite difficult work to do without errors and within the available time and resource window. A Palantir executive said, according to the article:

Palantir has long been a proud partner to some of the world’s top financial institutions. We’re honored that UniCredit has placed its confidence in Palantir once again and look forward to furthering the bank’s digital transformation.

Will Palantir be able to handle super-sized jobs like the NHS work and the UniCredit project? Personally I will be watching for news about both of these contract wins. For a 20 year old company with its roots in the intelligence community, success in health care and financial services will mark one of the few times, intelware has made the leap to mainstream commercial problem solving.

The question is, “Why have the other companies failed in financial services modernization?” I have a lecture about that. Curious to know more. Write benkent2020 at yahoo dot com, and one of my team will respond to you.

Stephen E Arnold, December 18, 2023

Why Is a Generative System Lazy? Maybe Money and Lousy Engineering

December 13, 2023

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Great post on the Xhitter. From @ChatGPT app:

we’ve heard all your feedback about GPT4 getting lazier! we haven’t updated the model since Nov 11th, and this certainly isn’t intentional. model behavior can be unpredictable, and we’re looking into fixing it

My experience with Chat GPT is that it responds like an intern working with my team between the freshman and sophomore years at college. Most of the information output is based on a “least effort” algorithm; that is, the shortest distance between A and B is vague promises.

image

An engineer at a “smart” software company leaps into action. Thanks, MSFT Copilot. Does this cartoon look like any of your technical team?

When I read about “unpredictable”, I wonder if people realize that probabilistic systems are wrong a certain percentage of the time or outputs. The horse loses the race. Okay, a fact. The bet on that horse is a different part of the stall.

But the “lazier” comment evokes several thoughts in my dinobaby mind:

  1. Allocate less time per prompt to reduce the bottlenecks in a computationally expensive system; thus, laziness is signal about crappy engineering
  2. Recognize that recycling results for frequent queries is a great way to give a user “something” close enough for horseshoes. If the user is clever, that user will use words like “give me more” or some similar rah rah to trigger another pass through what’s available
  3. The costs of system are so great, the Sam AI-Man system is starved for cash for engineers, hardware, bandwidth, and computational capacity. Until there’s more dough, the pantry will be poorly stocked.

Net net: Lazy may be a synonym for more serious issues. How does one make AI perform? Fabrication and marketing seem to be useful.

Stephen E Arnold, December 13, 2023

Interesting Factoid about Money and Injury Reduction Payoff of Robots at Amazon

December 12, 2023

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Who know if the data in “Amazon’s Humanoid Warehouse Robots Will Eventually Cost Only $3 Per Hour to Operate. That Won’t Calm Workers’ Fears of Being Replaced” are accurate. Anyone who has watch a video clip about the Musky gigapress or the Toyota auto assembly process understands one thing: Robots don’t take breaks, require vacations, or baloney promises that taking a college class will result in a promotion.

image

An unknown worker speaks with a hypothetical robot. The robot allegedly stepped on a worker named “John.” My hunch is that the firm’s PR firm will make clear that John is doing just fine. No more golf or mountain climbing but otherwise just super. Thanks MSFT Copilot. Good enough.

The headline item is the most important; that is, the idea of $3 per hour cost. That’s why automation even if the initial robots are lousy will continue apace. Once an outfit like Amazon figures out how to get “good enough” work from non-humans, it will be hasta la vista time.

However, the write up includes a statement which is fascinating in its vagueness. The context is that automation may mistake a humanoid for a box or a piece of equipment. The box is unlikely to file a law suit if the robot crushes it. The humanoid, on the other hand, will quickly surrounded by a flock of legal eagles.

Here’s the passage which either says a great deal about Amazon or about the research effort invested in the article:

And it’s still not clear whether robots will truly improve worker safety. One whistleblower report in 2020 from investigative journalism site Reveal included leaked internal data that showed that Amazon’s robotic warehouses had higher injury rates than warehouses that don’t use robots — Amazon strongly refuted the report at the time, saying that the reporter was "misinterpreting data." "Company data shows that, in 2022, recordable incident rates and lost-time incident rates were 15% and 18% lower, respectively, at Amazon Robotics sites than non-robotics sites," Amazon says on its website.

I understand the importance of the $3 per hour cost. But the major item of interest is the incidence of accidents when humanoids and robots interact in a fast-paced picking and shipping set up. The information provided about injuries is thin and warrants closer analysis in my opinion. I loved the absence of numeric context for the assertion of a “lower” injury rate. Very precise.

Stephen E Arnold, December 12, 2023

Did AI Say, Smile and Pay Despite Bankruptcy

December 11, 2023

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Going out of business is a painful event for [a] the whiz kids who dreamed up an idea guaranteed to baffle grandma, [b] the friends, family, and venture capitalists who funded the sure-fire next Google, and [c] the “customers” or more accurately the “users” who gave the product or service a whirl and some cash.

Therefore, one who had taken an entry level philosophy class when a sophomore might have brushed against the thorny bush of ethics. Some get scratched, emulate the folks who wore chains and sharpened nails under their Grieve St Laurent robes, and read medieval wisdom literature for fun. Others just dump that baloney and focus on figuring out how to exit Dodge City without a posse riding hard after them.

image

The young woman learns that the creditors of an insolvent firm may “sell” her account to companies which operate on a “pay or else” policy. Imagine. You have lousy teeth and you could be put in jail. Look at the bright side. In some nation states, prison medical services include dental work. Anesthetic? Yeah. Maybe not so much. Thanks, MSFT Copilot. You had a bit of a hiccup this morning, but you spit out a tooth with an image on it. Close enough.

I read “Smile Direct Club shuts down after Filing for Bankruptcy – What It Means for Customers.” With AI customer service solutions available, one would think that a zoom zoom semi-high tech outfit would find a way to handle issues in an elegant way. Wait! Maybe the company did, and this article documents how smart software may influence certain business decisions.

The story is simple. Smile Direct could not make its mail order dental business payoff. The cited news story presents what might be a glimpse of the AI future. I quote:

Smile Direct Club has also revealed its "lifetime smile guarantee" it previously offered was no longer valid, while those with payment plans set up are expected to continue making payments. The company has not yet revealed how customers can get refunds.

I like the idea that a “lifetime” is vague; therefore, once the company dies, the user is dead too. I enjoyed immensely the alleged expectation that customers who are using the mail order dental service — even though it is defunct and not delivering its “product” — will have to keep making payments. I assume that the friendly folks at online payment services and our friends at the big credit card companies will just keep doing the automatic billing. (Those payment institutions have super duper customer service systems in my experience. Yours, of course, may differ from mine.

I am looking forward to straightening out this story. (You know. Dental braces. Straightening teeth via mail order. High tech. The next Google. Yada yada.)

Stephen E Arnold, December 11, 2023

Safe AI or Money: Expert Concludes That Money Wins

December 8, 2023

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I read “The Frantic Battle over OpenAI Shows That Money Triumphs in the End.” The author, an esteemed wizard in the world of finance and economics, reveals that money is important. Here’s a snippet from the essay which I found truly revolutionary, brilliant, insightful, and truly novel:

image

The academic wizard has concluded that a ball is indeed round. The world of geometry has been stunned. The ball is not just round. It exists as a sphere. The most shocking insight from the Ivory Tower is that the ball bounces. Thanks for the good enough image, MSFT Copilot.

But ever since OpenAI’s ChatGPT looked to be on its way to achieving the holy grail of tech – an at-scale consumer platform that would generate billions of dollars in profits – its non-profit safety mission has been endangered by big money. Now, big money is on the way to devouring safety.

Who knew?

The essay continues:

Which all goes to show that the real Frankenstein monster of AI is human greed. Private enterprise, motivated by the lure of ever-greater profits, cannot be relied on to police itself against the horrors that an unfettered AI will create. Last week’s frantic battle over OpenAI shows that not even a non-profit board with a capped profit structure for investors can match the power of big tech and Wall Street. Money triumphs in the end.

Oh, my goodness. Plato, Aristotle, and other mere pretenders to genius you have been put to shame. My heart is palpitating from the revelation that “money triumphs in the end.”

Stephen E Arnold, December 8, 2023

« Previous PageNext Page »

  • Archives

  • Recent Posts

  • Meta