Israeli Intelware: Is It Time to Question Its Value?

October 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

In 2013, I believe that was the year, I attended an ISS TeleStrategies Conference. A friend of mine wanted me to see his presentation, and I was able to pass the Scylla and Charybdis-inspired security process and listen to the talk. (Last week I referenced that talk and quoted a statement posted on a slide for everyone in attendance to view. Yep, a quote from 2013, maybe earlier.)

After the talk, I walked quickly through the ISS exhibit hall. I won’t name the firms exhibiting because some of these are history (failures), some are super stealthy, and others have been purchased by other outfits as the intelware roll-ups continue. I do recall a large number of intelware companies with their headquarters in or near Tel Aviv, Israel. My impression at the time was that Israel’s butt-kicking software could make sense of social media posts, Dark Web forum activity, Facebook craziness, and Twitter disinformation. These Israeli outfits were then the alpha vendors. Now? Well, maybe a bit less alpha, drifting to beta or gamma.

One major to another: “Do you think our intel was wrong?” The other officer says, “I sat in a briefing teaching me that our smart software analyzed social media in real time. We cannot be surprised. We have the super duper intelware.” The major says, jarred by an explosion, “Looks like we were snookered by some Madison Avenue double talk. Let’s take cover.” Thanks, MidJourney. You do understand going down in flames. Is that because you are thinking about your future?

My impression was that the Israeli-developed software shared a number of functional and visual similarities. I asked people at the conference if they had noticed the dark themes, the similar if not identical timeline functions, and the fondness for maps on which data were plotted and projected. “Peas in a pod,” my friend, a former NATO officer, told me. Are not peas alike?

The reason — and no one has really provided this information — is that the developers shared a foxhole. The government entities in Israel train people with the software and systems proven over the years to be useful. The young trainees carry those learnings forward in their careers. Then, when mustered out, a few bright sparks form companies or join intelware giants like Verint and continue to enhance existing tools or build new ones. The idea is that life in the foxhole imbues those who experience it with certain similar mental furniture. The ideas, myths, and software experiences form the muddy floor and dirt walls of the foxhole. I suppose one could call this “digital bias,” which later manifests itself in the products and services of the dozens of Tel Aviv-based intelware, policeware, and spyware companies.

Why am I mentioning this?

The reason is that I was shocked and troubled by the alleged surprise attack. If you want to follow the activity, navigate to X.com and search that somewhat crippled system for #OSINT. Skip the “Top” results and go to the “Latest” tab.

Several observations:

  1. Are the Israeli intelware products (many of which are controversial and expensive) flawed? Obviously excellent software processing “signals” was blind to the surprise attack, right?
  2. Are the Israeli professionals operating the software unable to use it to prevent surprise attacks? Obviously excellent software in the hands of well-trained professionals flags signals and allows action to be taken when warranted. Did that happen? Has Israeli intel training fallen short of its goal of protecting the nation? Hmmm. Maybe, yes.
  3. Have those who hype intelware and the excellence of a particular system and method been fooled, falling into the dark pit of OSINT blind spots like groupthink and “reasoning from anecdote, not fact”? I am leaning toward a “yes”, gentle reader.

The time for a critical look at what works and what doesn’t is what the British call “from this day” work. The years of marketing craziness are one thing, but when either the system or the method allows people to be killed without warning, one message is broadcast: “Folks, something is very, very wrong.”

Perhaps certification of these widely used systems is needed? Perhaps a hearing in an appropriate venue is warranted?

Blind spots can cause harm. Marketers can cause harm. Poorly trained operators can cause harm. Even foxholes require tidying up. Technology for intelligence applications is easy to talk about, but it is now clear to everyone engaged in making sense of signals that one country’s glammed-up systems missed the wicket.

Stephen E Arnold, October 9, 2023

Canada vs. Google: Not a Fair Hockey Game

October 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I get a bit of a thrill when sophisticated generalist executives find themselves rejected by high-tech wizards. An amusing example of “Who is in charge here?” appears in “Google Rejects Trudeau’s Olive Branch, Threatens News Link Block Over New Law.”

A seasoned high-tech executive explains that the laptop cannot retrieve Canadian hockey news any longer. Thanks, Microsoft Bing. Nice maple leaf hat.

The write up states:

Alphabet Inc.’s Google moved closer to blocking Canadians from viewing news links on its search engine, after it rejected government regulations meant to placate its concerns about an impending online content law.

Yep, Canada may not be allowed into the select elite of Google users with news. Why? Canada passed a law with which Google does not agree. Imagine. Canada wants Google to pay for accessing, scraping, and linking to Canadian news.

Canada does not understand who is in charge. The Google is the go-to outfit. If you don’t believe me, just ask some of those Canadian law enforcement and intelligence analysts what online system is used to obtain high-value information. Hint. It is not yandex.ru.

The write up adds:

Google already threatened to remove links to news, and tested blocking such content for a small percentage of users in Canada earlier this year. On Friday, it went further, implying a block could be imminent as the current regulations would force the company to participate in the mandatory bargaining process while it applies for exemption.

Will the Google thwart the Canadian government? Based on the importance of the Google system to certain government interests, a deal of some sort seems likely. But Google could just buy Canada and hire some gig workers to run the country.

Stephen E Arnold, October 9, 2023

Cognitive Blind Spot 3: You Trust Your Instincts, Right?

October 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

ChatGPT became available in the autumn of 2022. By December, a young person fell in love with his chatbot. From this dinobaby’s point of view, that was quicker than a love affair ignited by a dating app. “Treason Case: What Are the Dangers of AI Chatbots?” misses the point of its own reporter’s story. The Beeb puts the blame on Jaswant Singh Chail, not the software. Justice needs an individual, not a pride of zeros and ones.

A bad actor tries to convince other criminals that he is honest, loyal, trustworthy, and an all-around great person. “Trust me,” he says. Some of those listening to the words are skeptical. Thanks, MidJourney. You are getting better at depicting duplicity.

Here’s the story: Shortly after discovering an online chatbot, Mr. Chail fell in love with an “online companion.” The Replika app allows a user to craft a chatbot. The protagonist in this love story promptly moved from casual chitchat to emotional attachment. As the narrative arc unfolded, Mr. Chail confessed that he was an assassin and that he wanted to kill the Queen of England. He planned to use a crossbow.

The article reports:

Marjorie Wallace, founder and chief executive of mental health charity SANE, says the Chail case demonstrates that, for vulnerable people, relying on AI friendships could have disturbing consequences. “The rapid rise of artificial intelligence has a new and concerning impact on people who suffer from depression, delusions, loneliness and other mental health conditions,” she says.

That seems reasonable. The software meshed nicely with the cognitive blind spot of trusting one’s intuition. Some call this “gut feel.” Whatever the label, the danger lies in confusing software with reality.

But what happens when the new Google Pixel 8 camera enhances an image automatically? Who wants a lousy snap? Google appears to favor a Mother Google approach. When a still or a video has been manipulated, what does one’s gut say? “I trust pictures and videos for accuracy.” Like the young, would-be, off-the-rails chatbot lover, zeros and ones can create some interesting effects.

What about you, gentle reader? Do you know how to recognize an unhealthy interaction with smart software? Can you determine if an image is “real” or the fabrication of a large outfit like Google?

Stephen E Arnold, October 9, 2023

Smart Productivity Software Means Pink Slip Flood

October 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Ready for some excitement, you under 50s?

Soon, workers may be spared the pain of training their replacements. Consciously, anyway. Wired reports, “Your Boss’s Spyware Could Train AI to Replace You.” Researcher Carl Frey’s landmark 2013 prediction that AI could threaten half of US jobs has not yet come to pass. Now that current tools like ChatGPT have proven (so far) less accurate and self-sufficient than advertised, some workers are breathing a sigh of relief. Not so fast, warns journalist Thor Benson. It is the increasingly pervasive “productivity” (aka monitoring) software we need to be concerned about. Benson writes:

“Enter corporate spyware, invasive monitoring apps that allow bosses to keep close tabs on everything their employees are doing—collecting reams of data that could come into play here in interesting ways. Corporations, which are monitoring their employees on a large scale, are now having workers utilize AI tools more frequently, and many questions remain regarding how the many AI tools that are currently being developed are being trained. Put all of this together and there’s the potential that companies could use data they’ve harvested from workers—by monitoring them and having them interact with AI that can learn from them—to develop new AI programs that could actually replace them. If your boss can figure out exactly how you do your job, and an AI program is learning from the data you’re producing, then eventually your boss might be able to just have the program do the job instead.”

Even at companies that do not use spyware, employees may unwittingly train their AI replacements simply by generating data as part of their work. To make matters worse, because it gets neither salary nor benefits, an algorithm need not exceed or even match a human’s performance to land the job.
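To make the mechanism concrete, here is a minimal, purely hypothetical sketch of how monitoring data could be recast as model training material. The event fields, helper function, and log entries are all invented for illustration; no specific vendor’s product is known to work this way.

```python
# Hypothetical sketch: a workplace monitoring log recast as supervised
# training pairs (context -> action), the raw material for "cloning" how
# an employee does a job. Entirely illustrative.

from typing import List, NamedTuple

class WorkEvent(NamedTuple):
    context: str   # what the employee was looking at (ticket, email, doc)
    action: str    # what the employee did in response

def to_training_pairs(log: List[WorkEvent]) -> List[dict]:
    """Convert a monitoring log into prompt/completion records."""
    return [{"prompt": e.context, "completion": e.action} for e in log]

log = [
    WorkEvent("Customer asks for refund on order", "Issue refund; send apology template"),
    WorkEvent("Invoice total mismatch flagged", "Escalate to finance with reconciliation sheet"),
]
print(to_training_pairs(log))  # ready-made fine-tuning data on "how you do your job"
```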

So what can we do? We could retrain workers but, as MIT economics professor David Autor notes, that is not one of the US’s strong suits. Or we could take a cue from the Industrial Revolution: Frey points to Britain’s Poor Laws, which gave financial relief to workers whose jobs became obsolete back then. Hmm, we wonder: How would a similar measure fare in the current US Congress?

Cynthia Murrell, October 9, 2023

Cognitive Blind Spot 2: Bandwagon Surfing or Do What May Be Fashionable

October 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Humans are into trends. The NFL and Taylor Swift appear to be a trend. A sporting money machine and a popular music money machine. Jersey sales increase. Ms. Swift’s music sales go up. New eyeballs track a certain football player. The question is, “Who is exploiting whom?”

Which bandwagon are you riding? Thank you, MidJourney. Gloom seems to be part of your DNA.

Think about large language models and smart software. A similar dynamic may exist. Late in 2022, the natural language interface became the next big thing. Students and bad actors figured out that using a ChatGPT-type service could expedite certain activities. Students could get 500-word essays in less than a minute. Bad actors could get snippets of code in seconds. In short, many people were hopping on the LLM bandwagon decorated with smart software logos.

Now a bandwagon powered by healthy skepticism may be heading toward Main Street. Wired Magazine published a short essay titled “Chatbot Hallucinations Are Poisoning Web Search.” The foundational assumption is that Web search was better before ChatGPT-type incursions. I am not sure that idea is valid, but for the purposes of illustrating bandwagon surfing, it will pass unchallenged. Wired’s main point is that as AI-generated content proliferates, the results delivered by Google and a couple of other, vastly less popular search engines will deteriorate. I think this is a way to assert that lousy LLM output will make Web search worse. “Hallucination” is jargon for made-up or just plain incorrect information.

Consider the essay “Evaluating LLMs Is a Minefield.” The essay and slide deck are the work of two AI wizards. The main idea is that figuring out whether a particular LLM or ChatGPT-type service is right, wrong, less wrong, more right, biased, or a digital representation of a 23-year-old art history major working in a public relations firm is difficult.

I am not going to take the side of either referenced article. The point is that the hyperbolic excitement about “smart software” seems to be giving way to LLM criticism. From being software for Every Man, the services are becoming tools for improving productivity.

To sum up, the original bandwagon has been pushed out of the parade by a new bandwagon filled with poobahs explaining that smart software, LLMs, et al. are making the murky, mysterious Web worse.

The question becomes, “Are you jumping on the bandwagon flying the banner ‘LLMs are really bad,’ or are you sticking with the rah-rah crowd?” The point is that information was once assumed to be good. Now information is less good. Imagine how difficult it will be to determine what’s right or wrong, biased or unbiased, acceptable or unacceptable.

Who wants to do the work to determine provenance or answer questions about accuracy? Not many people. That, rather than lousy Web search, may be more important to some professionals. But that does not solve the problem of the time and resources required to deal with accuracy and other issues.

So which bandwagon are you riding? The NFL or Taylor Swift? Maybe the tension between the two?

Stephen E Arnold, October 6, 2023

Is Google Setting a Trap for Its AI Competition?

October 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

The litigation about the use of Web content to train smart generative software is ramping up. Outfits like OpenAI, Microsoft, and Amazon and its new best friend will be snagged in the US legal system.

But what big outfit will be ready to offer those hungry to use smart software without legal risk? The answer is the Google.

How is this going to work?

Simple. Google is beavering away with its synthetic data. Some real data are used to train sophisticated stacks of numerical recipes. The idea is that these algorithms will be “good enough”; thus, the need for “real” information is obviated. And Google has another trick up its sleeve. The company has coveys of coders working on trimmed-down systems and methods. The idea is that using less information will produce more and better results than the crazy approach of indexing content from wherever in real time. The small data can be licensed while the competitors are spending their days with lawyers.

How do I know this? I don’t, but Google is providing tantalizing clues in marketing collateral like “Researchers from the University of Washington and Google Have Developed Distilling Step-by-Step Technology to Train a Dedicated Small Machine Learning Model with Less Data.” The author is a student who provides sources for the information about the “less is more” approach to smart software training.
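For readers who want the gist of the cited approach, here is a minimal sketch of the multi-task idea behind distilling step-by-step as publicly described: a small student model is trained both to predict answers and to reproduce the large teacher LLM’s rationales, which is where the “less data” claim comes from. The class and method names below are illustrative assumptions, not Google’s code.

```python
# Minimal sketch of the "distilling step-by-step" idea: train a small
# student on two tasks at once -- predict the label AND reproduce the
# teacher LLM's rationale -- so fewer labeled examples are needed.

from dataclasses import dataclass

@dataclass
class Example:
    question: str
    label: str       # ground-truth or teacher-predicted answer
    rationale: str   # chain-of-thought text generated by the large teacher LLM

def distill_step_by_step_loss(student, example: Example, rationale_weight: float = 0.5):
    """Multi-task loss: answer prediction plus rationale generation.

    `student` is assumed to be any seq2seq model exposing a
    loss(input_text, target_text) method that returns a scalar.
    """
    # Task 1: predict the label from the question.
    label_loss = student.loss("[label] " + example.question, example.label)
    # Task 2: reproduce the teacher's reasoning for the same question.
    rationale_loss = student.loss("[rationale] " + example.question, example.rationale)
    # Weighted sum; the rationale task is the extra supervision signal
    # that lets a small model learn from less data, per the paper's claim.
    return label_loss + rationale_weight * rationale_loss
```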

And, may the Googlers sing her praises, she cites Google technical papers. In fact, one of the papers is described by the fledgling Googler as “groundbreaking.” Okay.

What’s really being broken is the approach of some of Google’s most formidable competition.

When will the Google spring its trap? It won’t. But as the competitors get stuck in legal mud, the Google will be an increasingly attractive alternative.

The last line of the Google marketing piece says:

Check out the Paper and Google AI Article. All Credit For This Research Goes To the Researchers on This Project. Also, don’t forget to join our 30k+ ML SubReddit, 40k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.

Get that young marketer a Google mouse pad.

Stephen E Arnold, October 6, 2023

The Google and Its AI Peers Guzzle Water. Yep, Guzzle

October 6, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Much has been written about generative AI’s capabilities and its potential ramifications for business and society. Less has been stated about its environmental impact. The AP highlights this facet of the current craze in its article, “Artificial Intelligence Technology Behind ChatGPT Was Built in Iowa—With a Lot of Water.” Iowa? Who knew? Turns out, there is good reason to base machine learning operations, especially the training, in such a chilly environment. Reporters Matt O’Brien and Hannah Fingerhut write:

“Building a large language model requires analyzing patterns across a huge trove of human-written text. All of that computing takes a lot of electricity and generates a lot of heat. To keep it cool on hot days, data centers need to pump in water — often to a cooling tower outside its warehouse-sized buildings. In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools), a sharp increase compared to previous years that outside researchers tie to its AI research.”
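The pool comparison checks out with a quick back-of-the-envelope calculation. The only assumed figure is the pool volume: an Olympic-size pool holds roughly 2,500 cubic meters, about 660,000 US gallons.

```python
# Back-of-the-envelope check of the AP's comparison. The pool volume
# (~2,500 cubic meters) is a standard estimate, not from the article.
GALLONS_PER_CUBIC_METER = 264.172
pool_gallons = 2_500 * GALLONS_PER_CUBIC_METER   # ~660,430 gallons per pool
microsoft_2022_gallons = 1.7e9                   # from Microsoft's disclosure
print(microsoft_2022_gallons / pool_gallons)     # ~2,574 -- "more than 2,500" pools
```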

During the same period, Google’s water usage surged by 20%, according to the company. Notably, Google was strategic about where it guzzled this precious resource: it kept usage steady in Oregon, where there was already criticism about its water usage. But its consumption doubled outside Las Vegas, famously one of the nation’s hottest and driest regions. Des Moines, Iowa, on the other hand, is a much cooler and wetter locale. We learn:

“In some ways, West Des Moines is a relatively efficient place to train a powerful AI system, especially compared to Microsoft’s data centers in Arizona that consume far more water for the same computing demand. … For much of the year, Iowa’s weather is cool enough for Microsoft to use outside air to keep the supercomputer running properly and vent heat out of the building. Only when the temperature exceeds 29.3 degrees Celsius (about 85 degrees Fahrenheit) does it withdraw water, the company has said in a public disclosure.”

Though merely a trickle compared to what the same work would take in Arizona, that summer usage is still a lot of water. Microsoft’s Iowa data centers swilled about 11.5 million gallons in July 2022, the month just before GPT-4 finished training. Naturally, both Microsoft and Google insist they are researching ways to use less water. It would be nice if environmental protection were more than an afterthought.

The write-up introduces us to Shaolei Ren, a researcher at the University of California, Riverside. His team is working to calculate the environmental impact of generative AI enthusiasm. Their paper is due later this year, but they estimate ChatGPT swigs more than 16 ounces of water for every five to 50 prompts, depending on the servers’ location and the season. Will big tech find a way to curb AI’s thirst before it drinks us dry?
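Those figures translate readily into per-prompt terms. The arithmetic below uses only the numbers quoted above; the labels on the two cases follow the researchers’ point that siting and season drive the range.

```python
# Per-prompt water estimate implied by the researchers' range:
# 16 ounces (about half a liter) per 5 to 50 prompts.
ounces = 16
best_case = ounces / 50   # 0.32 oz per prompt (cool, wet siting and season)
worst_case = ounces / 5   # 3.2 oz per prompt (hot, dry siting and season)
print(f"{best_case:.2f} to {worst_case:.1f} ounces of water per prompt")
```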

Cynthia Murrell, October 6, 2023

Cognitive Blind Spot 1: Can You Identify Synthetic Data? Better Learn.

October 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

It has been a killer with the back-to-back trips to Europe and then to the intellectual hub of old-fashioned America. In France, I visited a location that is allegedly the office of a company which “owns” the domain rrrrrrrrrrr.com. No luck. Fake address. I then visited a semi-sensitive area in Paris, walking around in the confused fog only a 78-year-old can generate. My goal was to spot a special type of surveillance camera designed to feed data to a smart software system. The idea is that the images can be monitored through time, so a vehicle making frequent passes of a structure can be flagged, its number tag read, and a bit of thought given to answering the question, “Why?” I visited with a friend and big brain who was one of the technical keystones of an advanced search system. He gave me his most recent book, and I paid for my Orangina. Exciting.
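The pass-monitoring idea is simple enough to sketch: count how often each plate is seen near a structure inside a sliding time window and flag the ones above a threshold. The sketch below is purely illustrative; the class name, window, and threshold are assumptions, and no real system’s logic is implied.

```python
# Illustrative sketch of pass-frequency flagging: a plate seen too often
# near a structure within a time window gets flagged for human review.

from collections import defaultdict, deque

class PassMonitor:
    def __init__(self, window_hours: int = 24, max_passes: int = 3):
        self.window = window_hours * 3600      # window size in seconds
        self.max_passes = max_passes           # passes tolerated before flagging
        self.sightings = defaultdict(deque)    # plate -> recent timestamps

    def record(self, plate: str, timestamp: float) -> bool:
        """Record a sighting; return True if the vehicle should be flagged."""
        times = self.sightings[plate]
        times.append(timestamp)
        # Drop sightings that have aged out of the window.
        while times and timestamp - times[0] > self.window:
            times.popleft()
        return len(times) > self.max_passes

monitor = PassMonitor()
for hour in range(6):                          # six passes in six hours
    flagged = monitor.record("AB-123-CD", hour * 3600.0)
print("flag for review?", flagged)             # True
```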

One executive tells his boss, “Sir, our team of sophisticated experts reviewed these documents. The documents passed scrutiny.” One of the “smartest people in the room” asks, “Where are we going for lunch today?” Thanks, MidJourney. You do understand executive stereotypes, don’t you?

On the flights, I did some thinking about synthetic data. I am not sure that most people can provide a definition which embraces the Google’s efforts in the money-saving land of synthetic data. I don’t think too many people know about Charlie Javice’s use of synthetic data to whip up JPMC’s enthusiasm for her company Frank Financial. I don’t think most people understand that when one types a phrase into the Twitch AI Jesus, the software will output a video and mostly crazy talk along with some Christian lingo.

The purpose of this short blog post is to present an example of synthetic data and conclude by revisiting the question, “Can You Identify Synthetic Data?” The article I want to use as a hook for this essay is from Fortune Magazine. I love that name, and I think the wolves of Wall Street find it euphonious as well. Here’s the title: “Delta Is Fourth Major U.S. Airline to Find Fake Jet Aircraft Engine Parts with Forged Airworthiness Documents from U.K. Company.”

The write up states:

Delta Air Lines Inc. has discovered unapproved components in “a small number” of its jet aircraft engines, becoming the latest carrier and fourth major US airline to disclose the use of fake parts.  The suspect components — which Delta declined to identify — were found on an unspecified number of its engines, a company spokesman said Monday. Those engines account for less than 1% of the more than 2,100 power plants on its mainline fleet, the spokesman said. 

Okay, bad parts can fail. If the failure is in a critical component of a jet engine, the aircraft could — note that I am using the word could — experience a catastrophic failure. Translating catastrophic into more colloquial lingo, the sentence means catch fire and crash or something slightly less terrible; namely, catch fire, explode, eject metal shards into the tail assembly, or make a loud noise and emit smoke. Exciting, just not terminal.

I don’t want to get into how the synthetic or fake data made its way through the UK company, the UK bureaucracy, the Delta procurement process, and into the hands of the mechanics working in the US or offshore. The fake data did elude scrutiny for some reason. With money being of paramount importance, my hunch is that saving some money played a role.

If organizations cannot spot fake data when it relates to a physical and mission-critical component, how will they deal with fake data generated by smart software? The smart software can get it wrong because an engineer-programmer screwed up his or her math, or because the complex web of algorithms generates unanticipated behaviors from dependencies no one knew to check and validate.

What happens when a computer, which many people believe is “always” more right than a human, says, “Here’s the answer”? Many humans will skip the hard work because they are in a hurry, have no appetite for grunt work, or are scheduled by a Microsoft calendar to do something else when the quality assurance testing is supposed to take place.

Let’s go back to the question in the title of the blog post, “Can You Identify Synthetic Data?”

I don’t want to forget this part of the title, “Better learn.”

JPMC paid out more than $100 million in November 2022 because some of the smartest guys in the room weren’t that smart. But get this. JPMC is a big, rich bank. People who could die because of synthetic data are a different kettle of fish. Yeah, that’s what I thought about as I flew Delta back to the US from Paris. At the time, I thought Delta had not fallen prey to the scam.

I was wrong. Hence, I “better learn” myself.

Stephen E Arnold, October 5, 2023

What Type of Employee? What about Those Who Work at McKinsey & Co.?

October 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Yes, I read When McKinsey Comes to Town: The Hidden Influence of the World’s Most Powerful Consulting Firm by Walt Bogdanich and Michael Forsythe. No, I was not motivated to think happy thoughts about the estimable organization. Why? Oh, I suppose the image of the opioid addicts in southern Indiana, Kentucky, and West Virginia rained on the parade.

I did scan a “thought piece” written by McKinsey professionals, probably a PR person, certainly an attorney, and possibly a partner who owned the project. The essay’s title is “McKinsey Just Dropped a Report on the 6 Employee Archetypes. Good News for Some Organizations, Terrible for Others. What Type of Dis-Engaged Employee Is On Your Team?” The title was the tip-off that a PR person was involved. My hunch is that the McKinsey professionals want to generate some bookings for employee assessment studies. What better way than converting some proprietary McKinsey information into a white paper and then getting the white paper in front of an editor at an “influence center”? The answer to the question, obviously, is to hire McKinsey and let the firm tell you whom to cull.

Inc. converts the white paper into an article, and McKinsey defines the six types of employees. From my point of view, this is standard blue-chip consulting information production. However, there was one comment which caught my attention:

Approximately 4 percent of employees fall into the “Thriving Stars” category, represent top talent that brings exceptional value to the organization. These individuals maintain high levels of well-being and performance and create a positive impact on their teams. However, they are at risk of burnout due to high workloads.

Now what type of company hires these four percenters? Why, blue-chip consulting companies like McKinsey, Bain, BCG, Booz Allen, etc. And what are the contributions these firms’ professionals make to society? Jump back to When McKinsey Comes to Town. One of the highlights of that book is the discussion of the consulting firm’s role in the opioid epidemic.

That’s an achievement of which to be proud. Oh, and the other five types of employees? Don’t bother to apply for a job at the blue-chip outfits.

Stephen E Arnold, October 5, 2023

Kagi Rolls Out a Small Web Initiative

October 5, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Recall the early expectations for the Web: It would be a powerful conduit for instant connection and knowledge-sharing around the world. Despite promises to the contrary, that rosy vision has long since given way to commercial interests’ paid content, targeted ads, bots, and data harvesting. Launched in 2018, Kagi offers a way to circumvent those factors with its ad-free, data-protecting search engine—for a small fee, naturally. Now the company is promoting what it calls the Kagi Small Web initiative. We learn from the blog post:

“Since inception, we’ve been featuring content from the small web through our proprietary Teclis and TinyGem search indexes. This inclusion of high-quality, lesser-known parts of the web is part of what sets Kagi’s search results apart and gives them a unique flavor. Today we’re taking this a step further by integrating Kagi Small Web results into the index.”

See the write-up for examples. Besides these insertions into search results, one can also access these harder-to-find sources at the new Kagi Small Web website. This project displays a different random, recent Web page with each click of the “Next Post” button. Readers are also encouraged to check out their experimental Small YouTube, which we are told features content by YouTube creators with fewer than 4,000 subscribers. (Although as of this writing, the Small YouTube link supplied redirects right back to the source blog post. Hmm.)

The write-up concludes with these thoughts on Kagi’s philosophy:

“The driving question behind this initiative was simple yet profound: the web is made of millions of humans, so where are they? Why do they get overshadowed in traditional search engines, and how can we remedy this? This project required a certain leap of faith as the content we crawl may contain anything, and we are putting our reputation on the line vouching for it. But we also recognize that the ‘small web’ is the lifeblood of the internet, and the web we are fighting for. Those who contribute to it have already taken their own leaps of faith, often taking time and effort to create, without the assurance of an audience. Our goal is to change that narrative. Together with the global community of people who envision a different web, we’re committed to revitalizing a digital space abundant in creativity, self-expression, and meaningful content – a more humane web for all.”

Does this suggest that Google Programmable Search Engine is a weak sister?

Cynthia Murrell, October 5, 2023
