Google Trial: An Interesting Comment Amid the Yada Yada

May 8, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read “Google’s Antitrust Trial Spotlights Search Ads on the Final Day of Closing Arguments.” After decades of just collecting Google tchotchkes, US regulators appear to be making some progress. It is very difficult to determine if a company is a monopoly. It was much easier to count barrels of oil, billets of steel, and railroad cars than digital nothingness, wasn’t it?


A giant whose name is Googzilla has most of the toys. He is reminding those who want the toys about his true nature. I believe Googzilla. Do you? Thanks, Microsoft Copilot. Good enough.

One of the many reports of the Google monopoly legal activity finally provided a quite useful, clear statement. Here’s the passage which caught my eye:

a coalition of state attorneys said Google’s search advertising business has trapped advertisers into its ecosystem while higher ad prices haven’t led to higher returns.

I want to consider this assertion. Please, read the original write up on Digiday to get the “real” news report. I am not a journalist; I am a dinobaby, and I have some thoughts to capture.

First, the Google has been doing Googley things for about a quarter of a century, a bit longer if one counts the Backrub service in an estimable Stanford computer building. From my point of view, Google has been doing “clever.” That means apologizing, not asking permission. That means seeking inspiration from others; for example, the IBM Clever system, the Yahoo-Overture advertising system, and the use of “free” to gain access to certain content like books, and pretty much doing what it wants. After figuring out that it had to make money, Google “innovated” with advertising, paid a fine, and acquired people and technology to match ads to queries. Yep, Oingo (Applied Semantics) helped out. The current antitrust matter will be winding down in 2024 and will probably drag through 2025. Appeals for a company with lots of money can go slowly. Meanwhile Google’s activity can go faster.

Second, the data about the Google monopoly are not difficult to identify. There is the state of the search market. Eric Schmidt said years ago that Qwant kept him awake at night. I am not sure that was a credible statement. If Mr. Schmidt were awake at night, it might be the result of thinking about serious matters like money. His money. When Google became widely available, there were other Web search engines. I posted a list on my Web site which had a couple of hundred entries. Now the hot new search engines just recycle Bing and open source indexes, tossing in a handful of “special” sources like my mother jazzing up potato salad. There is Google search. And because of the reach of Google search, Google can sell ads.

Third, the ads are not just for search. Any click on a Google service is a click. Due to cute tricks like Chrome and ubiquitous services like maps, Google can slap ads many places. Other outfits cannot unless they are Google “partners.” Those partners are Google’s sales force. SEO customers become buyers of Google ads because that’s the most effective way to get traffic. Does a small business owner expect a Web site to be “found” without Google Local and maybe some advertising juice? Nope. No one but OSINT experts can get Google search to deliver useful results. Google Dorks exist for a reason. Google search quality drives ad sales. And YouTube ads? Lots of ads. Want an alternative? Good luck with Facebook, TikTok, ok.ru, or some other service.
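The “Google Dorks” aside refers to advanced query operators that coax precise results out of the engine. As a hedged illustration (the operators shown, such as site: and filetype:, are standard, but the helper function and example values are my own invention, not anything from the write up), here is how such a query string can be assembled:

```python
# Hypothetical helper for composing "dork"-style advanced search queries.
# The operators (site:, filetype:) are real; the function and values are
# invented for illustration only.

def dork(terms, **operators):
    """Build an advanced search query from free terms and operator filters."""
    parts = [f"{op}:{value}" for op, value in operators.items()]
    return " ".join(parts + [terms])

query = dork('"annual report"', site="example.com", filetype="pdf")
print(query)  # site:example.com filetype:pdf "annual report"
```

The point stands either way: ordinary users do not type queries like this, which is why default search quality, not operator wizardry, drives ad sales.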

Where’s the trial now? Google has asserted that it does not understand its own technology. The judge says he is circling down the drain of the marketing funnel. But the US government depends on the Google. That may be a factor or just the shadow of Googzilla.

Stephen E Arnold, May 8, 2024

A Look at Several Cyber Busts of 2023

May 8, 2024

Curious about cybercrime and punishment? Darknet data firm DarkOwl gives us a good rundown of selected takedowns in its blog post, “Cybercriminal Arrests and Disruptions: 2023 Look Back.” The post asserts law enforcement is getting more proactive about finding and disrupting hackers. (Whether that improvement is keeping pace with the growth of hacking is another matter.) We are given seven high-profile examples.

First was the FBI’s takedown of New York State’s Conor Fitzpatrick, admin of the dark web trading post BreachForums. Unfortunately, the site was back up and running in no time under Fitzpatrick’s partner. The FBI seems to have had more success disrupting the Hive Ransomware group, seizing assets and delivering decryption keys to victims. Europol similarly disrupted the Ragnar Locker Ransomware group and even arrested two key individuals. Then there were a couple of kids from the Lapsus$ Gang. Literally, these hackers were UK teenagers responsible for millions of dollars worth of damage and leaked data. See the write-up for more details on these and three other 2023 cases. The post concludes:

“Only some of the law enforcement action that took place in 2023 are described in this blog. Law enforcement are becoming more and more successful in their operations against cybercriminals both in terms of arrests and seizure of infrastructure – including on the dark web. However, events this year (2024) have already shown that some law enforcement action is not enough to take down groups, particularly ransomware groups. Notable activity against BlackCat/ALPHV and LockBit have shown to only take the groups out for a matter of days, when no arrests take place. BlackCat are reported to have recently conducted an exit scam after a high-profile ransomware was paid, and Lockbit seem intent on revenge after their recent skirmish with the law. It is unlikely that law enforcement will be able to eradicate cybercrime and the game whack-a-mole will continue. However, the events of 2023 show that the law enforcement bodies globally are taking action and standing up to the criminals creating dire consequences for some, which will hopefully deter future threat actors.”

One can hope.

Cynthia Murrell, May 8, 2024

Google Stomps into the Threat Intelligence Sector: AI and More

May 7, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Before commenting on Google’s threat services news, I want to remind you of the link to the list of Google initiatives which did not survive. You can find the list at Killed by Google. I want to mention this resource because Google’s product innovation and management methods are interesting, to say the least. Operating in Code Red or Yellow Alert or whatever the Google crisis buzzword is, generating sustainable revenue beyond online advertising has proven to be a bit of a challenge. Google is more comfortable using such methods as [a] buying a company and trying to scale it, [b] imitating another firm’s innovation, and [c] dumping big money into secret projects in the hopes that what comes out will not result in the firm’s getting its “glass” kicked to the curb.


Google makes a big entrance at the RSA Conference. Thanks, MSFT Copilot. Have you considered purchasing Google’s threat intelligence service?

With that as background, Google has introduced an “unmatched” cyber security service. The information was described at the RSA security conference and in a quite Googley blog post, “Introducing Google Threat Intelligence: Actionable threat intelligence at Google Scale.” Please note the operative word “scale.” If the service does not make money, Google will “not put wood behind” the effort. People won’t work on the project, and it will be left to dangle in the wind or just be shot like Cricket, a now famous example of animal husbandry. (Google’s Cricket was the Google Appliance. Remember that? Take over the enterprise search market? Nope. Bang, hasta la vista.)

Google’s new service aims squarely at the comparatively well-established and now maturing cyber security market. I have to check to see who owns what. Venture firms and others with money have been buying promising cyber security firms. Google owned a piece of Recorded Future. Now Recorded Future is owned by a third-party outfit called Insight. Darktrace has been or will be purchased by Thoma Bravo. Consolidation is underway. Thus, it makes sense for Google to enter the threat intelligence market, using its Mandiant unit as a springboard, one of those home diving boards, not the cliff-diving platform in Acapulco.

The write up says:

we are announcing Google Threat Intelligence, a new offering that combines the unmatched depth of our Mandiant frontline expertise, the global reach of the VirusTotal community, and the breadth of visibility only Google can deliver, based on billions of signals across devices and emails. Google Threat Intelligence includes Gemini in Threat Intelligence, our AI-powered agent that provides conversational search across our vast repository of threat intelligence, enabling customers to gain insights and protect themselves from threats faster than ever before.

Google to its credit did not trot out the “quantum supremacy” lingo, but the marketers did assert that the service offers “unmatched visibility in threats.” I like the “unmatched.” Not supreme, just unmatched. The graphic below illustrates the elements of the unmatchedness:


Credit: Google, 2024

But where is artificial intelligence in the diagram? Don’t worry. The blog explains that Gemini (Google’s AI “system”) delivers

AI-driven operationalization

But the foundation of the new service is Gemini, which does not appear in the diagram. That does not matter; the Code Red crowd explains:

Gemini 1.5 Pro offers the world’s longest context window, with support for up to 1 million tokens. It can dramatically simplify the technical and labor-intensive process of reverse engineering malware — one of the most advanced malware-analysis techniques available to cybersecurity professionals. In fact, it was able to process the entire decompiled code of the malware file for WannaCry in a single pass, taking 34 seconds to deliver its analysis and identify the kill switch. We also offer a Gemini-driven entity extraction tool to automate data fusion and enrichment. It can automatically crawl the web for relevant open source intelligence (OSINT), and classify online industry threat reporting. It then converts this information to knowledge collections, with corresponding hunting and response packs pulled from motivations, targets, tactics, techniques, and procedures (TTPs), actors, toolkits, and Indicators of Compromise (IoCs). Google Threat Intelligence can distill more than a decade of threat reports to produce comprehensive, custom summaries in seconds.

I like the “indicators of compromise.”
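The quoted passage describes an entity extraction step that converts free-text threat reporting into indicators of compromise (IoCs). As a hedged sketch of the concept only (this is my own minimal illustration with invented sample text, not Google’s or Mandiant’s implementation, which the blog says uses Gemini-driven models), the basic idea of pulling IoCs out of prose looks like this:

```python
import re

# Minimal, hypothetical IoC extractor: two regex patterns stand in for the
# far richer AI-driven extraction the Google blog post describes.
IOC_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "md5": re.compile(r"\b[a-fA-F0-9]{32}\b"),
}

def extract_iocs(text):
    """Return a dict mapping each IoC type to the values found in the text."""
    return {name: pattern.findall(text) for name, pattern in IOC_PATTERNS.items()}

# Invented sample report text for illustration.
report = ("The dropper beaconed to 203.0.113.7 and wrote a payload "
          "with MD5 d41d8cd98f00b204e9800998ecf8427e to disk.")

print(extract_iocs(report))
# {'ipv4': ['203.0.113.7'], 'md5': ['d41d8cd98f00b204e9800998ecf8427e']}
```

The hard part, of course, is everything the regexes cannot do: classifying the reporting, linking indicators to actors and TTPs, and summarizing a decade of reports, which is where the Gemini claims come in.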

Several observations:

  1. Will this service be another Google Appliance-type play for the enterprise market? It is too soon to tell, but with the pressure mounting from regulators, staff management issues, competitors, and savvy marketers in Redmond, “indicators” of success will be known in the next six to 12 months.
  2. Is this a business or just another item on a punch list? The answer to the question may be provided by what the established players in the threat intelligence market do and what actions Amazon and Microsoft take. Is a new round of big money acquisitions going to begin?
  3. Will enterprise customers “just buy Google”? Chief security officers have demonstrated that buying multiple security systems is a “safe” approach to a job which is difficult: Protecting their employers from deeply flawed software and years of ignoring online security.

Net net: In a maturing market, three factors may signal how the big, new Google service will develop. These are [a] price, [b] perceived efficacy, and [c] avoidance of a major issue like the SolarWinds’ matter. I am rooting for Googzilla, but I still wonder why Google shifted from Recorded Future to acquisitions and me-too methods. Oh, well. I am a dinobaby and cannot be expected to understand.

Stephen E Arnold, May 7, 2024

Buffeting AI: A Dinobaby Is Nervous

May 7, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I am not sure the “go fast” folks are going to be thrilled with a dinobaby rich guy’s view of smart software. I read “Warren Buffett’s Warning about AI.” The write up included several interesting observations. The only problem is that smart software is out of the bag. Outfits like Meta are pushing the open source AI ball forward. Other outfits are pushing, but Meta has big bucks. Big bucks matter in AI Land.


Yes, dinobaby. You are on the right wavelength. Do you think anyone will listen? I don’t. Thanks, MSFT Copilot. Keep up the good work on security.

Let’s look at a handful of statements from the write up and do some observing while some in the Commonwealth of Kentucky recover from the Derby.

First, the oracle of Omaha allegedly said:

“When you think about the potential for scamming people… Scamming has always been part of the American scene. If I was interested in investing in scamming— it’s gonna be the growth industry of all time.”

Mr. Buffett has nailed the scamming angle. I particularly liked the “always.” Imagine a country built upon scamming. That makes one feel warm and fuzzy about America. Imagine how those who are hostile to US interests interpret the comment. Ill will toward the US can now be based on the premise that “scamming has always been part of the American scene.” Trust us? Just ignore the oracle of Omaha? Unlikely.

Second, the wise, frugal icon allegedly communicated that:

the technology would affect “anything that’s labor sensitive” and that for workers it could “create an enormous amount of leisure time.”

What will those individuals do with that “leisure time”? Gobbling down social media? Working on volunteer projects like picking up trash from streets and highways?

The final item I will cite is his 2018 statement:

“Cyber is uncharted territory. It’s going to get worse, not better.”

Is that a bit negative?

Stephen E Arnold, May 7, 2024

The Everything About AI Report

May 7, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I read the Stanford Artificial Intelligence Report. If you have not seen the 500 page document, click here. I spotted an interesting summary of the document. “Things Everyone Should Understand About the Stanford AI Index Report” is the work of Logan Thorneloe, an author previously unknown to me. I want to highlight three points I carried away from Mr. Thorneloe’s essay. These may make more sense after you have worked through the beefy Stanford document, which, due to its size, makes clear that Stanford wants to be linked to the AI spaceship. (Does Stanford’s AI effort look like Mr. Musk’s or Mr. Bezos’ rocket? I am leaning toward the Bezos design.)


An amazed student absorbs information about the Stanford AI Index Report. Thanks, MSFT. Good enough.

The summary of the 500 page document makes clear that Stanford wants to track the progress of smart software, provide a policy document so that Stanford can obviously influence policy decisions made by people who are not AI experts, and then “highlight ethical considerations.” The assumption by Mr. Thorneloe and by the AI report itself is that Stanford is equipped to make ethical judgments about anything. The president of Stanford departed under a cloud for acting in an unethical manner. Plus, some of the AI firms have a number of Stanford graduates on their AI teams. Are those teams responsible for depictions of inaccurate historical personages? Okay, that’s enough about ethics. My hunch is that Stanford wants to be perceived as a leader. Mr. Thorneloe seems to accept this idea as a-okay.

The second point for me in the summary is that Mr. Thorneloe goes along with the idea that the Stanford report is unbiased. Writing about AI is, in my opinion of course, inherently biased. That’s the reason there are AI cheerleaders and AI doomsayers. AI is probability. How the software gets smart is biased by [a] how the thresholds are rigged up when a smart system is built, [b] the humans who do the training of the system and then “fine tune” or “calibrate” the smart software to produce acceptable results, and [c] the information used to train the system. More recently, human developers have been creating wrappers which effectively prevent the smart software from generating pornography or other “improper” or “unacceptable” outputs. I think the “bias” angle needs some critical thinking. Stanford’s report wants to cover the AI waterfront as Stanford maps and presents the geography of AI.
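The threshold point in [a] is easy to demonstrate concretely. As a hedged sketch (the scores and cutoffs below are invented for illustration; real systems tune thresholds empirically), the same model outputs produce different “decisions” depending entirely on a number a human picked when the system was built:

```python
# Illustration only: how a builder-chosen threshold biases what a
# probabilistic system reports. Scores and cutoffs are made up.

def classify(scores, threshold):
    """Label each probability score as 'flag' or 'pass' at a given cutoff."""
    return ["flag" if s >= threshold else "pass" for s in scores]

scores = [0.35, 0.55, 0.72, 0.91]  # hypothetical model output probabilities

strict = classify(scores, threshold=0.9)      # conservative cutoff
permissive = classify(scores, threshold=0.5)  # permissive cutoff

print(strict)      # ['pass', 'pass', 'pass', 'flag']
print(permissive)  # ['pass', 'flag', 'flag', 'flag']
```

Same model, same data, opposite behavior on three of four items. Whoever “rigs up” that cutoff has baked a bias into the system before any training data is even considered.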

The final point is the rundown of Mr. Thorneloe’s take-aways from the report. He presents ten. I think there may just be three. First, the AI work is very expensive. That leads to the conclusion that only certain firms can be in the AI game and expect to win and win big. To me, this means that Stanford wants the good old days of Silicon Valley to come back again. I am not sure that this approach to an important yet immature technology is a particularly good idea. One does not fix up problems with technology. Technology creates some problems, and like social media, what AI generates may have a dark side. With big money controlling the game, what’s that mean? That’s a tough question to answer. The US wants China and Russia to promise not to use AI in their nuclear weapons systems. Yeah, that will work.

Another take-away which seems important is the assumption that workers will be more productive. This is an interesting assertion. I understand that one can use AI to eliminate call centers. However, has Stanford made a case that the benefits outweigh the drawbacks of AI? Mr. Thorneloe seems to be okay with the assumption underlying the good old consultant-type of magic.

The general take-away from the list of ten take-aways is that AI is fueled by “industry.” What happened to the Stanford Artificial Intelligence Lab, synthetic data, and the high-confidence outputs? Nothing has happened. AI hallucinates. AI gets facts wrong. AI is a collection of technologies looking for problems to solve.

Net net: Mr. Thorneloe’s summary is useful. The Stanford report is useful. Some AI is useful. Writing 500 pages about a fast moving collection of technologies is interesting. I cannot wait for the 2024 edition. I assume “everyone” will understand AI PR.

Stephen E Arnold, May 7, 2024

Torrent Search Platform Tribler Works to Boost Decentralization with AI

May 7, 2024

Can AI be the key to a decentralized Internet? The group behind the BitTorrent-based search engine Tribler believes it can. TorrentFreak reports, “Researchers Showcase Decentralized AI-Powered Torrent Search Engine.” Even as the online world has mostly narrowed into commercially controlled platforms, researchers at the Netherlands’ Delft University of Technology have worked to decentralize and anonymize search. Their goal has always been to empower John Q. Public over governments and corporations. Now, the team has demonstrated the potential of AI to significantly boost those efforts. Writer Ernesto Van der Sar tells us:

“Tribler has just released a new paper and a proof of concept which they see as a turning point for decentralized AI implementations; one that has a direct BitTorrent link. The scientific paper proposes a new framework titled ‘De-DSI’, which stands for Decentralised Differentiable Search Index. Without going into technical details, this essentially combines decentralized large language models (LLMs), which can be stored by peers, with decentralized search. This means that people can use decentralized AI-powered search to find content in a pool of information that’s stored across peers. For example, one can ask ‘find a magnet link for the Pirate Bay documentary,’ which should return a magnet link for TPB-AFK, without mentioning it by name. This entire process relies on information shared by users. There are no central servers involved at all, making it impossible for outsiders to control.”

Van der Sar emphasizes De-DSI is still in its early stages: the demo was created with a limited dataset and rudimentary AI capabilities. The write-up briefly summarizes the approach:

“In essence, De-DSI operates by sharing the workload of training large language models on lists of document identifiers. Every peer in the network specializes in a subset of data, which other peers in the network can retrieve to come up with the best search result.”
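The quoted mechanism can be sketched in miniature. As a hedged illustration only (my own heavy simplification of the De-DSI idea, with invented peers, document identifiers, and a plain word-overlap score standing in for the per-peer language models the paper describes), the “each peer specializes in a subset, best local result wins” pattern looks like this:

```python
# Toy model of decentralized search: each peer holds a small subset of
# (document identifier -> description) mappings. A query fans out to every
# peer; the highest-scoring local candidate becomes the global answer.
# All data is invented for illustration.

PEERS = [
    {"tpb-afk-magnet": "tpb-afk pirate bay documentary"},
    {"ubuntu-iso-magnet": "ubuntu linux install image"},
]

def score(query, description):
    """Count how many query words appear in a document's description."""
    return sum(word in description for word in query.lower().split())

def decentralized_search(query):
    """Ask each peer for its candidates; return the best-scoring doc id."""
    candidates = []
    for peer in PEERS:
        for doc_id, desc in peer.items():
            candidates.append((score(query, desc), doc_id))
    return max(candidates)[1]

print(decentralized_search("pirate bay documentary"))  # tpb-afk-magnet
```

Note what is absent: there is no central index anywhere, which is exactly the property that makes the real system hard for outsiders to control.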

The team hopes to incorporate this tech into an experimental version of Tribler by the end of this year. Stay tuned.

Cynthia Murrell, May 7, 2024

Microsoft Security Messaging: Which Is What?

May 6, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

I am a dinobaby. I am easily confused. I read two “real” news items and came away confused. The first story is “Microsoft Overhaul Treats Security As Top Priority after a Series of Failures.” The subtitle is interesting too because it links “security” to monetary compensation. That’s an incentive, but why isn’t security simply part of the work behind an alleged monopoly’s products and services? I surmise the answer is, “Because security costs money, a lot of money.” That article asserts:

After a scathing report from the US Cyber Safety Review Board recently concluded that “Microsoft’s security culture was inadequate and requires an overhaul,” it’s doing just that by outlining a set of security principles and goals that are tied to compensation packages for Microsoft’s senior leadership team.

Okay. But security emerges from basic engineering decisions; for instance, does a developer spend time figuring out and resolving security issues when dependencies are unknown or documented only by a grousing user in a comment posted on a technical forum? Or does the developer include a new feature and move on to the next task, assuming that someone else or an automated process will make sure everything works without opening the door to the curious bad actor? I think that Microsoft assumes it deploys secure systems and that its customers have the responsibility to ensure their systems’ security.


The cyber racoons found the secure picnic basket was easily opened. The well-fed, previously content humans seem dismayed that their goodies were stolen. Thanks, MSFT Copilot. Definitely good enough.

The write up adds that Microsoft has three security principles and six security pillars. I won’t list these because the words chosen strike me like those produced by a lawyer, an MBA, and a large language model. Remember. I am a dinobaby. Six plus three is nine things. Some car executive said a long time ago, “Two objectives is no objective.” I would add nine generalizations are not a culture of security. Nine is like Microsoft Word features. No one can keep track of them because most users use Word to produce Words. The other stuff is usually confusing, in the way, or presented in a way that finding a specific feature is an exercise in frustration. Is Word secure? Sure, just download some nifty documents from a frisky Telegram group or the Dark Web.

The write up concludes with a weird statement. Let me quote it:

I reported last month that inside Microsoft there is concern that the recent security attacks could seriously undermine trust in the company. “Ultimately, Microsoft runs on trust and this trust must be earned and maintained,” says Bell. “As a global provider of software, infrastructure and cloud services, we feel a deep responsibility to do our part to keep the world safe and secure. Our promise is to continually improve and adapt to the evolving needs of cybersecurity. This is job #1 for us.”

First, there is the notion of trust. Perhaps Edge’s persistence and advertising in the start menu, SolarWinds, and the legions of Chinese and Russian bad actors undermine whatever trust exists. Most users are clueless about security issues baked into certain systems. They assume; they don’t trust. Cyber security professionals buy third-party security solutions like shoppers at a grocery store. Big companies’ senior executives don’t understand why the problem exists. Lawyers and accountants understand many things. Digital security is often not a core competency. “Let the cloud handle it” sounds pretty good when the fourth IT manager or the third security officer quits this year.

Now the second write up. “Microsoft’s Responsible AI Chief Worries about the Open Web.” First, recall that Microsoft owns GitHub, a very convenient source for individuals looking to perform interesting tasks. Some are good tasks like snagging a script to perform a specific function for a church’s database. Other software does interesting things in order to help a user shore up security. Rapid 7 metasploit-framework is an interesting example. Almost anyone can find quite a bit of useful software on GitHub. When I lectured in a central European country’s main technical university, the students were familiar with GitHub. Oh, boy, were they.

In this second write up I learned that Microsoft has released a 39 page “report” which looks a lot like a PowerPoint presentation created by a blue-chip consulting firm. You can download the document at this link, at least you could as of May 6, 2024. “Security” appears 78 times in the document. There are “security reviews.” There is “cybersecurity development” and a reference to something called “Our Aether Security Engineering Guidance.” There is “red teaming” for biosecurity and cybersecurity. There is security in Azure AI. There are security reviews. There is the use of Copilot for security. There is something called PyRIT which “enables security professionals and machine learning engineers to proactively find risks in their generative applications.” There is partnering with MITRE for security guidance. And there are four footnotes to the document about security.

What strikes me is that security is definitely a popular concept in the document. But the principles and pillars apparently require AI context. As I worked through the PowerPoint, I formed the opinion that a committee worked with a small group of wordsmiths and crafted a rather elaborate word salad about going all in with Microsoft AI. Then the group added “security” the way my mother would chop up a red pepper and put it in a salad for color.

I want to offer several observations:

  1. Both documents suggest to me that Microsoft is now pushing “security” as Job One, a slogan used by the Ford Motor Co. (How are those Fords faring in the reliability ratings?) Saying words and doing are two different things.
  2. The rhetoric of the two documents remind me of Gertrude’s statement, “The lady doth protest too much, methinks.” (Hamlet? Remember?)
  3. The US government, most large organizations, and many individuals “assume” that Microsoft has taken security seriously for decades. The jargon-and-blather PowerPoint makes clear that Microsoft is trying to find a nice way to say, “We are saying we will do better already. Just listen, people.”

Net net: Bandying about the word trust or the word security puts everyone on notice that Microsoft knows it has a security problem. But the key point is that bad actors know it, exploit the security issues, and believe that Microsoft software and services will be a reliable source of opportunity of mischief. Ransomware? Absolutely. Exposed data? You bet your life. Free hacking tools? Let’s go. Does Microsoft have a security problem? The word form is incorrect. Does Microsoft have security problems? You know the answer. Aether.

Stephen E Arnold, May 6, 2024

Reflecting on the Value Loss from a Security Failure

May 6, 2024

This essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Right after the October 2023 security lapse in Israel, I commented to one of the founders of a next-generation Israeli intelware developer, “Quite a security failure.” The response was, “It is Israel’s 9/11.” One of the questions that kept coming to my mind was, “How could such sophisticated intelligence systems, software, and personnel have dropped the ball?” I have arrived at an answer: Belief in the infallibility of in situ systems. Now I am thinking about the cost of a large-scale security lapse.


It seems the young workers are surprised the security systems did not work. Thanks, MSFT Copilot. Good enough, which may be similar to some firms’ security engineering.

Globes published “Big Tech 50 Reveals Sharp Falls in Israeli Startup Valuations.” The write up provides some insight into the business cost of security which did not live up to its marketing. The write up says:

The Israeli R&D partnership has reported to the TASE [Tel Aviv Stock Exchange] that 10 of the 14 startups in which it has invested have seen their valuations decline.

Interesting.

What strikes me is that the cost of a security lapse is obviously personal and financial. One of the downstream consequences is a loss of confidence or credibility. Israel’s hardware and software security companies have had, in my opinion, a visible presence at conferences addressing specialized systems and software. The marketing of the capabilities of these systems has been maturing and becoming more like Madison Avenue efforts.

I am not sure which is worse: The loss of “value” or the loss of “credibility.”

If we transport the question about the cost of a security lapse to a large US high-technology company, I am not sure a Globes-type of article captures the impact. Frankly, US companies suffer security issues on a regular basis. Only a few make headlines. And then the firms responsible for the hardware or software which are vulnerable because of poor security issue a news release, provide a software update, and move on.

Several observations:

  1. The glittering generalities about the security of widely used hardware and software are simply out of step with reality
  2. Vendors of specialized software such as intelware suggest that their systems provide “protection” or “warnings” about issues so that damage is minimized. I am not sure I can trust these statements.
  3. The customers, who may have made security configuration errors, have the responsibility to set up the systems, update, and have trained personnel operate them. That sounds great, but it is simply not going to happen. Customers are assuming what they purchase is secure.

Net net: The cost of security failure is enormous: Loss of life, financial disaster, and undermining the trust between vendor and customer. Perhaps some large outfits should take the security of the products and services they offer beyond a meeting with a PR firm, a crisis management company, or a go-go marketing firm? The “value” of security is high, but it is much more than a flashy booth, glib presentations at conferences, or a procurement team assuming what vendors present correlates with real world deployment.

Stephen E Arnold, May 6, 2024

Generative AI Means Big Money…Maybe

May 6, 2024

Whenever new technology appears on the horizon, there are always optimistic venture capitalists who jump on the idea that it will be a gold mine. While this is occasionally true, other times it’s a bust. Anything can sound feasible on paper, but reality often proves that brilliant ideas don’t work. Medium published Ashish Kakran’s article, “Generative AI: A New Gold Rush For Software Engineering.”

Kakran opens his article by invoking the brilliant simplicity of Einstein’s E=mc² formula to inspire readers. He suggests that generative AI will revolutionize industries the way Einstein’s formula changed physics. He also says that white collar jobs stand to be automated for the first time in history. In fact, white collar jobs have been automated or made obsolete for centuries.

Kakran then runs the numbers, complete with charts and explanations, about how generative AI is going to change the world. His diagrams and explanations probably mean something, but they read like white paper gibberish. This part, however, makes sense:

“If you rewind to the year 2008, you will suddenly hear a lot of skepticism about the cloud. Would it ever make sense to move your apps and data from private or colo [cated] data centers to cloud thereby losing fine-grained control. But the development of multi-cloud and devops technologies made it possible for enterprises to not only feel comfortable but accelerate their move to the cloud. Generative AI today might be comparable to cloud in 2008. It means a lot of innovative large companies are still to be founded. For founders, this is an enormous opportunity to create impactful products as the entire stack is currently getting built.”

The author is correct that there are business opportunities to leverage generative AI. Is it a California gold rush? Nobody knows. If you have the funding, expertise, and a good idea, then pursue it. If not, focusing on a more attainable career may be better.

Whitney Grace, May 6, 2024

Microsoft: Security Debt and a Cooked Goose

May 3, 2024

dinosaur30a_thumbThis essay is the work of a dinobaby. Unlike some folks, no smart software improved my native ineptness.

Microsoft has a deputy security officer. Who is it? For reasons of security, I don’t know. What I do know is that our test VPNs no longer work. That’s a good way to enforce reduced security: Just break Windows 11. (Oh, the pushed messages work just fine.)

image

Is Microsoft’s security goose cooked? Thanks, MSFT Copilot. Keep following your security recipe.

I read “At Microsoft, Years of Security Debt Come Crashing Down.” The idea is that technical debt has little hidden chambers, in this case, security debt. The write up says:

…negligence, misguided investments and hubris have left the enterprise giant on its back foot.

How has Microsoft responded? Great financial report and this type of news:

… in early April, the federal Cyber Safety Review Board released a long-anticipated report which showed the company failed to prevent a massive 2023 hack of its Microsoft Exchange Online environment. The hack by a People’s Republic of China-linked espionage actor led to the theft of 60,000 State Department emails and gained access to other high-profile officials.

Bad? Not as bad as this reminder that there are some concerning issues.

What is interesting is that big outfits, government agencies, and start ups just use Windows. It’s ubiquitous, relatively cheap, and good enough. Apple’s software is fine, but it is different. Linux has its fans, but it is work. Therefore, hello Windows and Microsoft.

The article states:

Just weeks ago, the Cybersecurity and Infrastructure Security Agency issued an emergency directive, which orders federal civilian agencies to mitigate vulnerabilities in their networks, analyze the content of stolen emails, reset credentials and take additional steps to secure Microsoft Azure accounts.

The problem is that Microsoft has succeeded in becoming, for many government and commercial entities, the only game in town. This warrants several observations:

  1. The Microsoft software ecosystem may be impossible to secure due to its size and complexity
  2. Government entities from America to Zimbabwe find the software “good enough”
  3. Security — despite the chit chat — is expensive and often given cursory attention by system architects, programmers, and clients

The hope is that smart software will identify, mitigate, and choke off the cyber threats. At cyber security conferences, I wonder if the attendees are paying attention to Emily Dickinson (the sporty nun of Amherst), who wrote:

Hope is the thing with feathers
That perches in the soul
And sings the tune without the words
And never stops at all.

My thought is that more than hope may be necessary. Hope in AI is the cute security trick of the day. Instead of a happy bird, we may end up with a cooked goose.

Stephen E Arnold, May 3, 2024
