One Half of the Sundar & Prabhakar Act Gets Egged: Garrf.

April 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

After I wrote Google Version 2: The Calculating Predator, Bear Stearns bought the rights to portions of my research and published one of its analyst reports. That report made a point about Google’s research into semantic search. Remember, this was in 2005, long before the AI balloon inflated to the size of Taylor Swift’s piggy bank. My client (whom I am not allowed to name) and I were in Bear Stearns’ Manhattan office. We received a call from Prabhakar Raghavan, who was the senior technology something at Yahoo at that time. I knew of Dr. Raghavan because he had been part of the Verity search outfit. On that call, Dr. Raghavan was annoyed that Bear Stearns suggested Yahoo was behind the eight ball in Web search. We listened, and I pointed out that Yahoo was not matching Google’s patent filing numbers. Patent counts are not proof of innovation, but they are one indicator. The Yahoo race car had sputtered and lost the search race. I recall one statement Dr. Raghavan uttered: “I can do a better search engine for $300,000.” Well, I am still waiting. Dr. Raghavan may have an opportunity to find his future elsewhere if he continues to get this type of improvised biographical explosive device shoved under his office door at Google. I want to point out that I thought Dr. Raghavan’s estimate of the cost of search was a hoot. How could he beat that for a joke worthy of Jack Benny?


A big dumb bunny gets egged. Thanks, MSFT Copilot. Good enough.

I am referring to “The Man Who Killed Google Search,” written by Edward Zitron. For those to whom Mr. Zitron is not a household name like Febreze air freshener, he is “the CEO of national Media Relations and Public Relations company EZPR, of which I am both the E (Ed) and the Z (Zitron). I host the Better Offline Podcast, coming to iHeartRadio and everywhere else you find your podcasts February 2024.” For more about Mr. Zitron, navigate to this link. (Yep, it takes quite a while to load, but be patient.)

The main point of the write up is that the McKinsey-experienced Sundar Pichai (the other half of the comedy act) hired the article-writing, Verity-seasoned Dr. Raghavan to help steer the finely crafted corporate aircraft carrier USS Google into the Sea of Money. Even though the duo are not very good at comedy, they are doing a bang-up job of making the creaking online advertising machine output big money. If you don’t know how big, just check out the earnings for the most recent financial quarter at this link. If you don’t want to wade through Silicon Valley jargon, Google is “a two trillion dollar company.” How do you like that, Mr. and Mrs. Traditional Advertising?

The write up is filled with proper names of Googlers past and present. The point is that the comedy duo dumped some individuals who embraced the ethos of the old, engineering-oriented, relevant search results Google. The vacancies were filled with those who could shove more advertising into what once were clean, reasonably well-lighted places. At the same time, carpetland (my term for the executive corridor down which Messrs. Brin and Page once steered their Segways) rose above the wonky world of the engineers, the programmers, the Ivory Tower thinker types, and the outright wonkiness of the advanced research units. (Yes, there were many at one time.)

Using the thought processes of McKinsey (the opioid idea folks) and the elocutionary skills of Dr. Raghavan, Google search degraded while the money continued to flow. The story presented by Mr. Zitron is interesting. I will leave it to you to internalize it and thank your lucky stars you are not given the biographical improvised explosive device as a seat cushion. Yowzah.

Several observations:

  1. I am not sure the Sundar & Prabhakar duo wrote the script for the Death of Google Search. Believe me, there were other folks in Google carpetland aiding the process. How about a baby maker in the legal department as an example of ground rules? What about an attempted suicide by a senior senior senior manager’s squeeze? What about a big time thinker’s untimely demise as a result of narcotics administered by a rental female?
  2. The problems at Google are a result of decades of high school science club members acting out their visions of themselves as masters of the universe and a desire to rig the game so money flowed. Cleverness, cute tricks, and owning the casino and the hotel and the parking lot were part of Google’s version of Hotel California. The business set up was money in, fancy dancing in public, and nerdland inside. Management? Hey, math is hard. Managing is zippo.
  3. The competitive arena was not set up for a disruptor like the Google. I do not want to catalog what the company did to capture what appears to be a very good market position in online advertising. After a quarter century, the idea that Google might be an alleged monopoly is getting some attention. But alleged is one thing; change is another.
  4. The innovator’s dilemma has arrived in the lair of Googzilla. After Google invented the transformer, OpenAI made something snazzy with it and cut a deal with Microsoft. The result was the AI hyper moment with Google viewed as a loser. Forget the money. Google is not able to respond, some said. Perception is important. The PR gaffe in Paris where Dr. Prabhakar showed off Bard outputting incorrect information; the protests and arrests of staff; and the laundry list of allegations about the company’s business practices in the EU are compounding the one really big problem — Google’s ability to control its costs. Imagine. A corporate grunt sport could be the hidden disease. Is Googzilla clear headed or addled? Time will tell, I believe.

Net net: The man who killed Google is just a clueless accomplice, not the wizard with the death ray cooking the goose and its eggs. Ultimately, in my opinion, we have to blame the people who use Google products and services, rely on Google advertising, and trust search results. Okay, Dr. Raghavan, suspended sentence. Now you can go build your $300,000 Web search engine. I will be available to evaluate it as I did Search2, Neeva, and the other attempts to build a better Google. Can you do it? Sure, you will be a Xoogler. Xooglers can do anything. Just look at Mr. Brin’s airship. And that egg will wash off, unlike that crazy idea to charge Verity customers for each index entry passed for each user’s query. And that’s the joke that’s funnier than the Paris bollocksing of smart software. Taxi meter pricing for an in-house, enterprise search system. That is truly hilarious.

Stephen E Arnold, April 30, 2024

The Google Explains the Future of the Google Cloud: Very Googley, Of Course

April 30, 2024

This essay is the work of a dumb dinobaby. No smart software required.

At its recent Next 24 conference, Google Cloud and associates shared their visions for the immediate future of AI. Through the event’s obscurely named Session Library, one can watch hundreds of sessions and access resources connected to many more. The idea — if you have not caught on to the Googley nomenclature — is to make available videos of the talks at the conference. To narrow the results, one can filter by session category, conference track, learning level, solution, industry, topic of interest, and whether video is available. Keep in mind that the words you (a normal human, I presume) may use to communicate your interest may not be the lingo Googzilla speaks. AI and Machine Learning feature prominently. Other key areas include data and databases, security, development and architecture, productivity, and revenue growth (naturally). There is even a considerable nod to diversity, equity, and inclusion (DEI). Okay, nod, nod.

Here are a few session titles from just the “AI and ML” track to illustrate the scope of this event and the available information:

  • A cybersecurity expert’s guide to securing AI products with Google SAIF
  • AI for banking: Streamline core banking services and personalize customer experiences
  • AI for manufacturing: Enhance productivity and build innovative new business models
  • AI for telecommunications: Transform customer interactions and network operations
  • AI in capital markets: The biggest bets in the industry
  • Accelerate software delivery with Gemini and Code Transformations
  • Revolutionizing healthcare with AI
  • Streamlining access to youth mental health services

It looks like there is something for everybody. We think the titles make reasonably clear the scope and bigness of Google’s aspirations. Nor would we expect less from a $2 trillion outfit based on advertising, would we? Run a query for Code Red (or, in Google lingo, CodeRED), and you will be surprised that the state-of-emergency, Microsoft-is-a-PR-king mentality persists. (Is this the McKinsey way?) Well, not for those employed at McKinsey. Former McKinsey professionals have more latitude in their management methods; for example, emulating high school science club planning techniques. There are no sessions we could spot about Google’s competition. If one is big enough, there is no competition. One of Googzilla’s relatives made a mess of Tokyo real estate largely without lasting consequences.

Cynthia Murrell, April 30, 2024

NSO Pegasus: No Longer Flying Below the Radar

April 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “AP Exclusive: Polish Opposition Senator Hacked with Spyware.” I remain fearful of quoting the AP or Associated Press. I think it is a good business move to have an 89-year-old terrified of an American “institution,” don’t you? I think I am okay if I tell you the AP recycled a report from the University of Toronto’s Citizen Lab. Once again, the researchers have documented the use of what I call “intelware” by a nation state. The AP and other “real” news outfits prefer the term “spyware.” I think it has more sizzle, but I am going to put NSO Group’s mobile phone system and method in the category of intelware. The reason is that specialized software like Pegasus gathers information for a nation’s intelligence entities. Well, that’s the theory. The companies producing these platforms and tools want to answer such questions as “Who is going to undermine our interests?” or “What’s the next kinetic action directed at our facilities?” or “Who is involved in money laundering, human trafficking, or arms deals?”


Thanks, MSFT Copilot. Cutting down the cycles for free art, are you?

The problem is that specialized software is no longer secret. The Citizen Lab and the AP have been diligent in explaining how some of the tools work and what type of information can be gathered. My personal view is that information about these tools has been converted into college programming courses, open source software tools, and headline grabbing articles. I know from personal experience that most people do not have a clue how data from an iPhone can be exfiltrated, cross-correlated, and used to track down those who would violate the laws of a nation state. But, as the saying goes, information wants to be free. Okay, it’s free. How about that?

The write up contains an interesting statement. I want to note that I am not plagiarizing, undermining advertising sales, or choking off subscriptions. I am offering the information as a peg on which to hang some observations. Here’s the quote:

“My heart sinks with each case we find,” Scott-Railton [a senior researcher at UT’s Citizen Lab] added. “This seems to be confirming our worst fear: Even when used in a democracy, this kind of spyware has an almost immutable abuse potential.”

Okay, we have malware, a command-and-control system, logs, and a variety of delivery mechanisms.

I am baffled because malware is used by both good and bad actors. Exactly what do the University of Toronto and the AP want to happen? The reality is that once secret information is leaked, it becomes the Teflon for rapidly diffusing applications. Does writing about what I view as an “old” story change what’s happening with potent systems and methods? Will government officials join in a kumbaya moment and force the systems and methods to fall into disuse? Endless recycling of an instrumental action by this country or that agency gets us where?

In my opinion, the sensationalizing of behavior does not correlate with responsible use of technologies. I think the Pegasus story is a search for headlines or recognition for saying, “Look what we found. Country X is a problem!” Spare me. Change must occur within institutions. Those engaged in the use of intelware and related technologies are aware of issues. These are, in my experience, not ignored. Improper behavior is rampant in today’s datasphere.

Standing on the sidelines and yelling at a player who let the team down does what exactly? Perhaps a more constructive approach can be identified and offered as a solution beyond Pegasus again? Broken record. I know you are “just doing your job.” Fine but is there a new tune to play?

Stephen E Arnold, April 29, 2024

A Modern Spy Novel: A License to Snoop

April 29, 2024

This essay is the work of a dumb dinobaby. No smart software required.

“UK’s Investigatory Powers Bill to Become Law Despite Tech World Opposition” reports that the Investigatory Powers Amendment Bill, or IPB, is now law. In a nutshell, the law expands the scope of data collection by law enforcement and intelligence services. The Register, a UK online publication, asserts:

Before the latest amendments came into force, the IPA already allowed authorized parties to gather swathes of information on UK citizens and tap into telecoms activity – phone calls and SMS texts. The IPB’s amendments add to the Act’s existing powers and help authorities trawl through more data, which the government claims is a way to tackle “modern” threats to national security and the abuse of children.


Thanks, Copilot. A couple of omissions from my prompt, but your illustration is good enough.

One UK elected official said:

“Additional safeguards have been introduced – notably, in the most recent round of amendments, a ‘triple-lock’ authorization process for surveillance of parliamentarians – but ultimately, the key elements of the Bill are as they were in early versions – the final version of the Bill still extends the scope to collect and process bulk datasets that are publicly available, for example.”

Privacy advocates are concerned about expanding data collections’ scope. The Register points out that “big tech” feels as though it is being put on the hot seat. The article includes this statement:

Abigail Burke, platform power program manager at the Open Rights Group, previously told The Register, before the IPB was debated in parliament, that the proposals amounted to an “attack on technology.”

Several observations:

  1. The UK is a member in good standing of an intelligence sharing entity which includes Australia, Canada, New Zealand, and the US. These nation states watch one another’s activities and sometimes emulate certain policies and legal frameworks.
  2. The IPA may be one additional step on a path leading to a ban on end-to-end-encrypted messaging. Such a ban, if passed, would prove disruptive to a number of business functions. Bad actors will ignore such a ban and continue their effort to stay ahead of law enforcement using homomorphic encryption and other sophisticated techniques to keep certain content private.
  3. Opportunistic messaging firms like Telegram may incorporate technologies which exploit modern virtual servers and other technology to deploy networks which are hidden and less easily “seen” by existing monitoring technologies. Bad actors can implement new methods, forcing LE and intelligence professionals to operate in reaction mode. IPA is unlikely to change this cat-and-mouse game.
  4. Each day brings news of new security issues with widely used software and operating systems. Banning encryption may have some interesting downstream and unanticipated effects.
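The "homomorphic encryption" mentioned above refers to schemes that let a party compute on data while it stays encrypted. A minimal sketch of the underlying idea: textbook RSA happens to be multiplicatively homomorphic. This toy (tiny primes, no padding, not the schemes bad actors would actually use, and absolutely not secure) only illustrates the property:

```python
# Toy illustration of a homomorphic property: textbook RSA satisfies
# E(a) * E(b) mod n == E(a * b mod n), so a third party can multiply
# values without ever decrypting them. Demo only -- NOT secure.

p, q = 61, 53            # tiny primes, illustration only
n = p * q                # RSA modulus (3233)
phi = (p - 1) * (q - 1)  # Euler's totient (3120)
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent via modular inverse (Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 9
# Multiply the ciphertexts; decrypting the result yields the product
# of the plaintexts, even though neither plaintext was exposed.
product_of_ciphertexts = (encrypt(a) * encrypt(b)) % n
assert decrypt(product_of_ciphertexts) == (a * b) % n  # 63
```

Practical fully homomorphic schemes (supporting addition and multiplication at scale) are far heavier machinery, which is part of why monitoring traffic protected this way is so difficult.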

Net net: I am not sure that modern threats will decrease under IPA. Even countries with the most sophisticated software, hardware, and humanware security systems can be blindsided. Gaffes in Israel have had devastating consequences that an IPA-type approach would not have remedied.

Stephen E Arnold, April 29, 2024

Right, Professor. No One Is Using AI

April 29, 2024

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Artificial intelligence and algorithms aren’t new buzzwords, but they are the favorite technology jargon being tossed around BI and IT water coolers. (Or would it be Zoom conferences these days?) AI has been a part of modern life for years, but AI engines are finally “smart enough” to do actual jobs—sort of. There are still big problems with AI, but one expert shares his take on why the technology isn’t being adopted more in the UiPath article: “3 Common Barriers to AI Adoption and How to Overcome Them.”

Whenever new technology hits the market, experts write lists about why more companies aren’t implementing it. The first “mistake” is that companies don’t know how to adopt AI because they don’t know about all the work processes within their organizations. The way to overcome this issue is to take an inventory of the processes, which can be done via data mining. That’s not so simple if a company doesn’t have the software or know-how.

The second “mistake” is lack of expertise about the subject. The cure for this is classes and “active learning.” Isn’t that another term for continuing education? The third “mistake” is lack of trust and risks surrounding AI. Those exist because the technology is new and needs to be tested more before it’s deployed on a mass scale. Smaller companies don’t want to be guinea pigs so they wait until the technology becomes SOP.

AI is another tool that will become as ubiquitous as mobile phones, but the expert is correct about this:

“These barriers are significant, but they pale in comparison to the risk of delaying AI adoption. Early adopters are finding new AI use cases and expanding their lead on the competition every day.

There’s lots to do to prepare your organization for this new era, but there’s also plenty of value and advantages waiting for you along your AI adoption journey. Automation can do a lot to help you move forward quickly to capture AI’s value across your organization.”

If your company finds an AI solution that works, then that’s wonderful. Automation is part of advancing technology, but AI isn’t ready to be deployed by all companies. If something works for a business and it’s not too archaic, then don’t fix what ain’t broke.

But students have figured out how to use AI to deal with certain professors. No, I am not mentioning any names.

Whitney Grace, April 29, 2024

AI Does Prediction about Humans: What Could Go Wrong?

April 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The academic institution which took money from everyone’s favorite expert on exploitation has revealed an interesting chunk of research. Sadly, it is about a broader concept of exploitation than that practiced on those laboring in his mansions. “MIT Study Reveals an AI Model That Can Predict Future Actions of Human.” The title seems a bit incomplete, but no doubt Mr. Epstein would embrace the technology. Imagine. Feed in data about those whom he employed and match the outputs to the interests of his clients and friends.

The write up says:

A new study from researchers at MIT and the University of Washington reveals an AI model that can accurately predict a person or a machine’s future actions.  The AI is known as the latent inference budget model (L-IBM). The study authors claim that L-IBM is better than other previously proposed frameworks capable of modeling human decision-making. It works by examining past behavior, actions, and limitations linked to the thinking process of an agent (which could be either a human or another AI). The data or result obtained after the assessment is called the inference budget.

Very academic sounding. I expected no less from MIT and its companion institution.

To model the decision-making process of an agent, L-IBM first analyzes an individual’s behavior and the different variables that affect it.  “In other words, we seek to model both what agents wish to do and what agents will actually do in any given state,” the researchers said. This step involved observing agents placed in a maze at random positions. The L-IBM model was then employed to understand their thinking/computational limitations and predict their behavior.


A predictive system allows for more efficient use of available resources. Smart software does not protest, require benefits, or take vacations. Thanks, MSFT Copilot. Good enough. Just four tries today.

The method seems less labor-intensive than the one the old cancer wizard IBM Watson relied upon. This model processes behavior data, not selected information such as cancer treatments. Then the new system will observe actions and learn what those humans will do next.
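The core notion — an agent's "inference budget" limits how much planning it can do, and observed behavior reveals that budget — can be illustrated with a toy sketch. To be clear, this is a hypothetical illustration of bounded planning, not the MIT/UW researchers' L-IBM model; the maze, function names, and budget values are invented for the example:

```python
from collections import deque

# Hypothetical sketch of bounded planning (NOT the L-IBM model itself):
# an agent searches a tiny maze but may expand at most `budget` states.
# A generous budget reaches the goal; a tiny one fails. In the L-IBM
# spirit, watching which agents succeed hints at their planning budget.

MAZE = [
    "S.#.",
    ".#..",
    "...G",
]

def plan(budget: int) -> bool:
    """Breadth-first search from S toward G, capped at `budget` expansions."""
    rows, cols = len(MAZE), len(MAZE[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if MAZE[r][c] == "S")
    frontier, seen, expanded = deque([start]), {start}, 0
    while frontier and expanded < budget:
        r, c = frontier.popleft()
        expanded += 1
        if MAZE[r][c] == "G":
            return True  # goal found within the inference budget
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and MAZE[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append((nr, nc))
    return False  # budget exhausted before reaching the goal

assert plan(20) is True   # ample budget: the agent solves the maze
assert plan(3) is False   # starved budget: the agent gives up
```

The researchers' actual contribution is the inverse problem — inferring the budget from observed behavior — but the toy shows why budget and competence are linked at all.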

Then the clever researchers arranged a game:

The researchers made the subjects play a reference game. The game involves a speaker and a listener. The speaker receives a set of different colors; they pick one but can’t tell the name of the color they picked directly to the listener. The speaker describes the color for the listener through natural language utterances (basically, the speaker gives out different words as hints). If the listener selects the same color the speaker picked from the set, they both win.

At this point in the write up, I was wondering how long the process requires and what the fully loaded costs would be to get one useful human prediction. The write up makes clear that more work was required. Now the model played chess with humans. (I thought the Google cracked this problem with DeepMind methods after IBM’s chess playing system beat up a world champion human.)

One of the wizards is quoted in the write up as stating:

“For me, the most striking thing was the fact that this inference budget is very interpretable. It is saying tougher problems require more planning or being a strong player means planning for longer. When we first set out to do this, we didn’t think that our algorithm would be able to pick up on those behaviors naturally.”

Yes, there are three steps. But the expert notes:

“We demonstrated that it can outperform classical models of bounded rationality while imputing meaningful measures of human skill and task difficulty,” the researchers note. If we know that a human is about to make a mistake, having seen how they have behaved before, the AI agent could step in and offer a better way to do it. Or the agent could adapt to the weaknesses that its human collaborators have. Being able to model human behavior is an important step toward building an AI agent that can actually help that human…

If Mr. Epstein had access to a model with this capability, he might still be with us. Other applications of the technology may lead to control of malleable humans.

Net net: MIT is a source of interesting investigations like the one conducted after the Epstein antics became more widely known. Light the light of learning.

Stephen E Arnold, April 26, 2024

Not Only Those Chasing Tenure Hallucinate, But Some Citations Are Wonky Too

April 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I read “ChatGPT Hallucinates Fake But Plausible Scientific Citations at a Staggering Rate, Study Finds.” Wow. “Staggering.” The write up asserts:

A recent study has found that scientific citations generated by ChatGPT often do not correspond to real academic work.

In addition to creating non-reproducible research projects, now those “inventing the future” and “training tomorrow’s research leaders” appear to find smart software helpful in cooking up “proof” and “evidence” to help substantiate “original” research. Note: The quotes are for emphasis and added by the Beyond Search editor.


Good enough, ChatGPT. Is the researcher from Harvard health?

Research conducted by a Canadian outfit sparked this statement in the article:

…these fabricated citations feature elements such as legitimate researchers’ names and properly formatted digital object identifiers (DOIs), which could easily mislead both students and researchers.

The student who did the research told PsyPost:

“Hallucinated citations are not easy to spot because they often contain real authors, journals, proper issue/volume numbers that match up with the date of publication, and DOIs that appear legitimate. However, when you examine hallucinated citations more closely, you will find that they are referring to work that does not exist.”

The researcher added:

“The degree of hallucination surprised me,” MacDonald told PsyPost. “Almost every single citation had hallucinated elements or were just entirely fake, but ChatGPT would offer summaries of this fake research that was convincing and well worded.”

My thought is that more work is needed to determine the frequency with which AI-made-up citations appear in papers destined for peer review or for personal aggrandizement on services like ArXiv.
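The researcher's point is that these fakes pass superficial checks. A hedged sketch of why: a surface-level DOI format test is trivial, and a fabricated DOI can pass it; only actually resolving the identifier (for example, against doi.org) reveals that the cited work does not exist. The function names and the sample DOIs below are invented for illustration:

```python
import re
import urllib.request

# Sketch: why hallucinated citations slip past casual inspection.
# The regex matches the common shape of modern DOIs (prefix "10.",
# a registrant code, a slash, then a suffix) -- a purely structural test.

DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(doi: str) -> bool:
    """Format check only -- a fabricated DOI can pass this easily."""
    return bool(DOI_PATTERN.match(doi))

def doi_resolves(doi: str, timeout: float = 5.0) -> bool:
    """Network check: does doi.org actually know this DOI?
    (Requires connectivity; not exercised in this sketch.)"""
    req = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

# A plausible-looking but invented DOI sails through the format check:
assert looks_like_doi("10.1037/fake.2024.0042") is True
assert looks_like_doi("not-a-doi") is False
```

The gap between the two checks is exactly the gap the study describes: the citations look right, and only verification against the real literature exposes them.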

The excitement of a president departing Stanford University, coupled with the hoo-hah at Harvard related to “ethics,” raises questions about the moral compass used by universities to guide their educational battleships. Now we learn that professors are using AI and including made-up or fake data in their work?

What’s the conclusion?

[a] On the beam and making ethical behavior part of the woodwork

[b] Supporting and rewarding crappy work

[c] Ignoring the reality that the institutions have degraded over time

[d] Scrolling TikTok looking for grant tips.

If you don’t know, ask You.com or a similar free smart service.

Stephen E Arnold, April 26, 2024

AI Girlfriends: The Opportunity of a Lifetime

April 26, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Relationships are hard. Navigating each other’s unique desires, personalities, and expectations can be a serious challenge. That is, if one is dating a real person. Why bother when, for just several thousand dollars a month, you can date a tailor-made AI instead? We learn from The Byte, “Tech Exec Predicts Billion-Dollar AI Girlfriend Industry.” Writer Noor Al-Sibai tells us:

“When witnessing the sorry state of men addicted to AI girlfriends, one Miami tech exec saw dollar signs instead of red flags. In a blog-length post on X-formerly-Twitter, former WeWork exec Greg Isenberg said that after meeting a young guy who claims to spend $10,000 a month on so-called ‘AI girlfriends,’ or relationship-simulating chatbots, he realized that eventually, someone is going to capitalize upon that market the way Match Group has with dating apps. ‘I thought he was kidding,’ Isenberg wrote. ‘But, he’s a 24-year-old single guy who loves it.’ To date, Match Group — which owns Tinder, Hinge, Match.com, OKCupid, Plenty of Fish, and several others — has a market cap of more than $9 billion. As the now-CEO of the Late Checkout holding company startup noted, someone is going to build the AI version and make a billion or more.”

Obviously. They are probably collaborating with the makers of sex robots already. Though many strongly object, it seems only a matter of time before fake women replace real ones for a significant number of men. Will this help assuage the loneliness epidemic, or only make it worse? There is also the digital privacy angle to consider. On the other hand, perhaps this is for the best in the long run. The Earth is overpopulated, anyway.

Cynthia Murrell, April 26, 2024

Telegram Barks, Whines, and Wants a Treat

April 25, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Tucker Carlson, an American TV star journalist lawyer person, had an opportunity to find his future elsewhere after changes at Rupert Murdoch’s talking heads channel. The future, it seems, is for Mr. Carlson to present news via Telegram, a messaging platform with optional end-to-end encryption. It features selectable levels of security. Click enough, and the content passed via the platform becomes an expensive and time-consuming decryption job. Mr. Carlson wanted to know more about his new broadcast home. It appears that as part of the tie-up between the two, the usually low-profile, free-speech flag waver Pavel Durov agreed to a one-hour interview. You can watch the video on YouTube and be monitored by those soon-to-be-gone cookies, or on Telegram and be subject to its interesting free speech activities.


A person dressed in the uniform of an unfriendly enters the mess hall of a fighting force engaged in truth, justice, and the American way. The bold lad in red forgets he is dressed as an enemy combatant and does not understand why everyone is watching him with suspicion or laughter because he looks like a fool or a clueless dolt. Thanks, MSFT Copilot. Good enough. Any meetings in DC today about security?

Pavel Durov insists that he is not as smart as his brother. He tells Mr. Carlson [bold added for emphasis. Editor]:

So Telegram has been a tool for those to a large extent. But it doesn’t really matter whether it’s opposition or the ruling party that is using Telegram for us. We apply the rules equally to all sides. We don’t become prejudiced in this way. It’s not that we are rooting for the opposition or we are rooting for the ruling party. It’s not that we don’t care. But we think it’s important to have this platform that is neutral to all voices because we believe that the competition of different ideas can result in progress and a better world for everyone. That’s  in stark contrast to say Facebook which has said in public. You know we tip the scale in favor of this or that movement and this or that country all far from the west and far from Western media attention. But they’ve said that what do you think of that tech companies choosing governments? I think that’s one of the reasons why we ended up here in the UAE out of all places right? You don’t want to be geopolitically aligned. You don’t want to select the winners in any of these political fights and that’s why you have to be in a neutral place.  … We believe that Humanity does need a neutral platform like Telegram that would be respectful to people’s privacy and freedoms.

Wow, the royal “we.” The word salad. Then the Apple editorial control.

Okay, the flag bearer for secure communications yada yada. Do I believe this not-as-smart-as-my-brother guy?

No.

Mr. Durov says one thing and then does another, endangering lives and creating turmoil among those who do require secure communications. Whom, you may ask? How about intelligence operatives, certain war fighters in Ukraine and other countries in conflict, and experts working on sensitive commercial projects. Sure, bad actors use Telegram, but that’s what happens when one embraces free speech.

Now it seems that Mr. Durov has modified his position to sort-of free speech.

I learned this from articles like “Telegram to Block Certain Content for Ukrainian Users” and “Durov: Apple Demands to Ban Some Telegram Channels for Users with Ukrainian SIM Cards.”

In the interview between two estimable individuals, Mr. Durov made the point that he was approached by individuals working in US law enforcement. In very nice language, Mr. Durov explained they were inept, clumsy, and focused on getting access to the data in his platform. He pointed out that he headed off to Dubai, where he could operate without having to bow down, lick boots, sell out, or cooperate with some oafs in law enforcement.

But then, I read about Apple demanding that Telegram curtail free speech for “some” individuals. Well, isn’t that special? Say one thing, criticize law enforcement, and then roll over for Apple. That is a company, as I recall, which is super friendly with another nation state somewhat orthogonal to the US. Furthermore, Apple is proud of its efforts to protect privacy. Rumors suggest Apple is not too eager to help out some individuals investigating crimes because the sacred iPhone is above the requirements of a mere country… with exceptions, of course. Of course.

The article “Durov: Apple Demands to Ban Some Telegram Channels for Users with Ukrainian SIM Cards” reports:

Telegram founder Pavel Durov said that Apple had sent a request to block some Telegram channels for Ukrainian users. Although the platform’s community usually opposes such blocking, the company has to listen to such requests in order to keep the app available in the App Store.

Why roll over? The write up quotes Mr. Durov as saying:

…, it doesn’t always depend on us.

Us. The royal we again. The company is owned by Mr. Durov. The smarter brother is a math genius with something like two PhDs, and there are about 50 employees. “Us.” Who are the people in the collective consisting of one horn blower?

Several observations:

  1. Apple has more power or influence over Telegram than law enforcement from a government
  2. Mr. Durov appears to say one thing and then do the opposite, perhaps thinking no one will notice
  3. Relying on Telegram for secure communications may not be the best idea I have heard today.

Net net: Is this a “signal” that absolutely no service can be trusted? I don’t have a scorecard for trust bandits, but I will start one, I think. In the meantime, face-to-face meetings in selected locations without mobile devices may be one option to explore, but it sure is easy to use Telegram to transmit useful information to a drone operator in order to obtain a desired outcome. Like Mr. Snowden, Mr. Durov has made a decision. Actions have consequences; word sewage may not.

Stephen E Arnold, April 25, 2024

AI Versus People? That Is Easy. AI

April 25, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I don’t like to include management information in Beyond Search. However, I have noticed more stories about management decisions involving information technology. Here’s an example of my breaking my own editorial policy. Navigate to “SF Exec Defends Brutal Tech Trend: Lay Off Workers to Free Up Cash for AI.” I noted this passage:

Executives want fatter pockets for investing in artificial intelligence.

image

Okay, Mr. Efficiency and mobile phone betting addict, you have reached a logical decision. Why are there no pictures of friends, family, and achievements in your window office? Oh, that’s MSFT Copilot’s work. What’s that say?

I think this means that “people resources” can be dumped in order to free up cash to place bets on smart software. The write up explains the management decision making this way:

Dropbox’s layoff was largely aimed at freeing up cash to hire more engineers who are skilled in AI.

How expensive is AI for the big technology companies? The write up provides this factoid, which comes from the masterful management bastion:

Google AI leader Demis Hassabis said the company would likely spend more than $100 billion developing AI.

Smart software is the next big thing. Big outfits like Amazon, Google, Facebook, and Microsoft believe it. Venture firms appear to be into AI. Software development outfits are beavering away with smart technology to make their already stellar “good enough” products even better.

Money buys innovation until it doesn’t. The reason is that the time from roll out to saturation can be difficult to predict. Look how long it has taken smart phones to become marketing exercises, not technology demonstrations. How significant is saturation? Look at the machinations at Apple, or at CPUs that are increasingly difficult to differentiate for a person who wants to use a laptop for business.

There are benefits. These include:

  • Those getting fired can say, “AI RIF’ed me.”
  • Investments in AI can perk up investors.
  • Jargon-savvy consultants can land new clients.
  • Leadership teams can rise above termination because these wise professionals are the deciders.

A few downsides can be identified despite the immaturity of the sector:

  • Outputs can be incorrect, leading to what might be called poor decisions. (Sorry, Ms. Smith, your child died because the smart dosage system malfunctioned.)
  • A large, no-man’s land is opening between the fast moving start ups who surf on cloud AI services and the behemoths providing access to expensive infrastructure. Who wants to operate in no-man’s land?
  • The lack of controls on smart software guarantees that bad actors will have ample tools with which to innovate.
  • Knock-on effects are difficult to predict.

Net net: AI may be diffusing more quickly and in ways some experts choose to ignore… until they are RIF’ed.

Stephen E Arnold, April 25, 2024
