Engineering Trust: Will Weaponized Data Patch the Social Fabric?

March 7, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Trust is a popular word. Google wants me to trust the company. Yeah, I will jump right on that. Politicians want me to trust their attestations that citizens’ interests are important. I worked in Washington, DC, for too long. Nope, I just have too much first-hand exposure to the way “things work.” What about my bank? It wants me to trust it. But isn’t the institution the subject of a couple of government investigations? Oh, not important. And what about the images I see when I walk gingerly between the guard rails? I trust them, right? Ho ho ho.

In our post-Covid, pre-US national election world, the word “trust” is carrying quite a bit of freight. Whom do I trust? Not too many people. What about good old Socrates, who was an Athenian when Greece was not yet a collection of ferocious football teams and sun seekers? As you may recall, he trusted fellow residents of Athens. He ended up dead, either from a lousy snack bar meal and beverage or because his friends did him in.

One of his alleged precepts in his pre-artificial intelligence world was:

“We cannot live better than in seeking to become better.” — Socrates

Got it, Soc.


Thanks, MSFT Copilot, provider of PC “moments.” Good enough.

I read “Exclusive: Public Trust in AI Is Sinking across the Board.” Then I thought about Socrates being convicted for corruption of youth. See. Education does not bring unlimited benefits. Apparently Socrates asked annoying questions which opened him to charges of impiety. (Side note: Hey, Socrates, go with the flow. Just pray to the carved mythical beast, okay?)

A loss of public trust? Who knew? I thought it was common courtesy, a desire to discuss and compromise, not to whip out a weapon and shoot, bludgeon, or stab someone to death. In the case of Haiti, a twist is that a victim is bound and then barbequed in a steel drum. Cute, and to me a variation of stacking seven tires in a pile, dousing them with gasoline, inserting a person, and igniting the combo. I noted a variation in Ukraine. Elderly women make cookies laced with poison and provide them to special operation fighters. Subtle and effective due to troop attrition, I hear. Should I trust US Girl Scout cookies? No thanks.

What’s interesting about the write up is that it provides statistics to back up this brilliant and innovative insight about modern life: its focus on artificial intelligence. Let me pluck several examples from the dot-point-filled write up:

  1. “Globally, trust in AI companies has dropped to 53%, down from 61% five years ago.”
  2. “Trust in AI is low across political lines. Democrats’ trust in AI companies is at 38%, independents are at 25%, and Republicans at 24%.”
  3. “Eight years ago, technology was the leading industry in trust in 90% of the countries Edelman studies. Today, it is the most trusted in only half of countries.”

AI is trendy; crunchy click bait is highly desirable even for an estimable survivor of Silicon Valley style news reporting.

Let me offer several observations which may either be troubling or typical outputs from a dinobaby working in an underground computer facility:

  1. Close-knit groups are more likely to have some concept of trust. The exception, of course, is the behavior of the Hatfields and McCoys.
  2. Outsiders are viewed with suspicion. Often for no reason, a newcomer becomes the default bad entity.
  3. In my lifetime, I have watched institutions take actions which erode trust on a consistent basis.

Net net: Old news. AI is not new. Hyperbole and click obsession are factors which illustrate the erosion of social cohesion. Get used to it.

Stephen E Arnold, March 7, 2024

NSO Group: Pegasus Code Wings Its Way to Meta and Mr. Zuckerberg

March 7, 2024

This essay is the work of a dumb dinobaby. No smart software required.

NSO Group’s senior managers and legal eagles will have an opportunity to become familiar with an okay Brazilian restaurant and a waffle shop. That lovable leader of Facebook, Instagram, Threads, and WhatsApp may have put a stick in the spokes of the now-ageing digital bicycle doing business as NSO Group. The company’s mark is Pegasus, a flying horse. Pegasus’s dad was Poseidon, and his mom was the knockout Gorgon Medusa, who did some innovative hair treatments. The mythical Pegasus helped out other gods until Zeus stepped in and acted with extreme prejudice. Quite a myth.


Poseidon decides to kill the mythical Pegasus, not for its software, but for its getting out of bounds. Thanks, MSFT Copilot. Close enough.

Life imitates myth. “Court Orders Maker of Pegasus Spyware to Hand Over Code to WhatsApp” reports that the hand over decision:

is a major legal victory for WhatsApp, the Meta-owned communication app which has been embroiled in a lawsuit against NSO since 2019, when it alleged that the Israeli company’s spyware had been used against 1,400 WhatsApp users over a two-week period. NSO’s Pegasus code, and code for other surveillance products it sells, is seen as a closely and highly sought state secret. NSO is closely regulated by the Israeli ministry of defense, which must review and approve the sale of all licenses to foreign governments.

NSO Group hired former DHS and NSA official Stewart Baker to fix up NSO Group’s gyro compass. Mr. Baker is a podcaster and is affiliated with the law firm Steptoe & Johnson. For more color about Mr. Baker, please scan “Former DHS/NSA Official Stewart Baker Decides He Can Help NSO Group Turn A Profit.”

A decade ago, Israel’s senior officials might have been able to prevent a social media company from getting a copy of the Pegasus source code. Not anymore. Israel’s home-grown intelware technology simply did not thwart, prevent, or warn about the Hamas attack in the autumn of 2023. If NSO Group were battling in court with Harris Corp. or Textron, I would not worry. Mr. Zuckerberg’s companies are not directly involved with national security technology. From what I have heard at conferences, Mr. Zuckerberg’s commercial enterprises are responsive to law enforcement requests when a bad actor uses Facebook for an allegedly illegal activity. But Mr. Zuckerberg’s managers are really busy with higher-priority tasks. Some folks engaged in investigations of serious crimes must be patient. Presumably the investigators can pass their time scrolling through #Shorts. If the Guardian’s article is accurate, those Facebook employees can now learn how Pegasus works. Will any of those learnings stick? One hopes not.

Several observations:

  1. Companies which make specialized software guard their systems and methods carefully. Well, that used to be true.
  2. The reorganization of NSO Group has not lowered the firm’s public relations profile. NSO Group can make headlines, which may not be desirable for those engaged in national security.
  3. Disclosure of the specific Pegasus systems and methods will get a warm, enthusiastic reception from those who exchange ideas for malware and related tools on private Telegram channels, Dark Web discussion groups, or via one of the “stealth” communication services which pop up like mushrooms after rain in rural Kentucky.

Will the software Pegasus be terminated? I remain concerned that source code revealing how to perform certain tasks may lead to downstream, unintended consequences. Specialized software companies try to operate with maximum security. Now Pegasus may be flying away unless another legal action prevents this.

Where is Zeus when one needs him?

Stephen E Arnold, March 7, 2024

AI and Warfare: Gaza Allegedly Experiences AI-Enabled Drone Attacks

March 7, 2024

This essay is the work of a dumb dinobaby. No smart software required.

We have officially crossed a line. DeepNewz reveals: “AI-Enabled Military Tech and Indian-Made Hermes 900 Drones Deployed in Gaza.” Is this what they mean by “helpful AI”? We cannot say we are surprised. The extremely brief write-up tells us:

“Reports indicate that Israel has deployed AI-enabled military technology in Gaza, marking the first known combat use of such technology. Additionally, Indian-made Hermes 900 drones, produced in collaboration between Adani‘s company and Elbit Systems, are set to join the Israeli army’s fleet of unmanned aerial vehicles. This development has sparked fears about the implications of autonomous weapons in warfare and the role of Indian manufacturing in the conflict in Gaza. Human rights activists and defense analysts are particularly worried about the potential for increased civilian casualties and the further escalation of the conflict.”

On a minor but poetic note, a disclaimer states the post was written with ChatGPT. Strap in, fellow humans. We are just at the beginning of a long and peculiar ride. How are those assorted government committees doing with their AI policy planning?

Cynthia Murrell, March 7, 2024

Kagi Hitches Up with Wolfram

March 6, 2024

This essay is the work of a dumb dinobaby. No smart software required.

“Kagi + Wolfram” reports that the for-fee Web search engine with AI has hooked up with one of the pre-eminent mathy people innovating today. The write up includes PR about the upsides of Kagi search and Wolfram’s computational services. The article states:

…we have partnered with Wolfram|Alpha, a well-respected computational knowledge engine. By integrating Wolfram Alpha’s extensive knowledge base and robust algorithms into Kagi’s search platform, we aim to deliver more precise, reliable, and comprehensive search results to our users. This partnership represents a significant step forward in our goal to provide a search engine that users can trust to find the dependable information they need quickly and easily. In addition, we are very pleased to welcome Stephen Wolfram to Kagi’s board of advisors.


The basic wagon gets a rethink with other animals given a chance to make progress. Thanks, MSFT Copilot. Good enough, but in truth I gave up trying to get a similar image with the dog replaced by a mathematician and the pig replaced with a perky entrepreneur.

The integration of mathiness with smart search is a step forward, certainly more impressive than other firms’ recycling of Web content into bubble gum cards presenting answers. Kagi is taking steps — small, methodical ones — toward what I have described as “search enabled applications” and what my friend Dr. Greg Grefenstette described in his book with the snappy title “Search-Based Applications: At the Confluence of Search and Database Technologies (Synthesis Lectures on Information Concepts, Retrieval, and Services, 17).”

It may seem like a big step from putting mathiness in a Web search engine to creating a platform for search enabled applications. It may be, but I like to think that some bright young sprouts will figure out that linking a mostly brain dead legacy app with a Kagi-Wolfram service might be useful in a number of disciplines. Even some super confident really brilliantly wonderful Googlers might find the service useful.
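For readers who wonder what wiring a legacy app to a search-plus-computation service might look like, here is a minimal sketch. The routing heuristic, function names, and stub responses are all invented for illustration; they are not Kagi’s or Wolfram’s actual APIs, and a real integration would replace the stubs with authenticated HTTP calls.

```python
# Hypothetical sketch of a "search-enabled application": a legacy app hands a
# query to a thin layer that routes computable questions to a computational
# engine and everything else to a web search index. All names and behaviors
# here are illustrative stand-ins, not real vendor APIs.

def looks_computable(query: str) -> bool:
    """Crude routing heuristic: digits or math operators suggest computation."""
    return any(ch.isdigit() or ch in "+-*/^=" for ch in query)

def web_search(query: str) -> str:
    # Stand-in for a call to a for-fee web search API.
    return f"[web results for: {query}]"

def compute(query: str) -> str:
    # Stand-in for a call to a computational knowledge engine.
    return f"[computed answer for: {query}]"

def search_enabled_lookup(query: str) -> str:
    """Route a query to computation or web search and return the answer."""
    return compute(query) if looks_computable(query) else web_search(query)

if __name__ == "__main__":
    print(search_enabled_lookup("integrate x^2 from 0 to 1"))
    print(search_enabled_lookup("history of metasearch engines"))
```

The design point is the router, not the stubs: a legacy app only needs one entry point, and the search layer decides which back end answers.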

Net net: I am gratified that Kagi’s for-fee Web search is evolving. Google’s apparent ineptitude might give Kagi the chance Neeva never had.

Stephen E Arnold, March 6, 2024

Philosophy and Money: Adam Smith Remains Flexible

March 6, 2024

This essay is the work of a dumb dinobaby. No smart software required.

In the early twenty-first century, China was slated to overtake the United States as the world’s top economy. Unfortunately for the “sleeping dragon,” China’s economy has tanked due to many factors. The country, however, still remains a strong spot for technology development such as AI and chips. The Register explains why China is still doing well in the tech sector: “How Did China Get So Good At Chips And AI? Congressional Investigation Blames American Venture Capitalists.”

Venture capitalists are always interested in increasing their wealth and subverting anything preventing that. While the US government has choked China’s semiconductor industry and denied it the use of tools to develop AI, venture capitalists are funding those sectors. The US House Select Committee on the Chinese Communist Party (CCP) shared that five venture capital firms are funneling billions into these two industries: Walden International, Sequoia Capital, Qualcomm Ventures, GSR Ventures, and GGV Capital. Chinese semiconductor and AI businesses are linked to human rights abuses and the People’s Liberation Army. These five venture capital firms don’t appear interested in respecting human rights or preventing the spread of communism.

The House Select Committee on the CCP discovered that $1.9 billion went to AI companies that support China’s mega-surveillance state and aided in the Uyghur genocide. The US blacklisted these AI-related companies. The committee also found that $1.2 billion was sent to 150 semiconductor companies.

The committee also accused the VCs of sharing more than funding with China:

“The committee also called out the VCs for “intangible” contributions – including consulting, talent acquisition, and market opportunities. In one example highlighted in the report, the committee singled out Walden International chairman Lip-Bu Tan, who previously served as the CEO of Cadence Design Systems. Cadence develops electronic design automation software which Chinese corporates, like Huawei, are actively trying to replicate. The committee alleges that Tan and other partners at Walden coordinated business opportunities and provided subject-matter expertise while holding board seats at SMIC and Advanced Micro-Fabrication Equipment Co. (AMEC).”

Sharing knowledge and business connections is as bad as (if not worse than) funding China’s tech sector. It’s like providing instructions and resources on how to build a nuclear weapon. If China had only the resources, it wouldn’t be as frightening.

Whitney Grace, March 6, 2024

Poohbahs Poohbahing: Just Obvious Poohbahing

March 6, 2024

This essay is the work of a dumb dinobaby. No smart software required.

We’re already feeling the effects of AI technology in deepfake videos, soundbites, and generative text. While our present circumstances are only the beginning of AI technology, so-called experts are already claiming AI has gone bananas. The Verge, a popular Silicon Valley news outlet, released a new podcast episode in which they declare that “The AIs Are Officially Out Of Control.”

AI-generated images and text aren’t 100% accurate. AI images are prone to include extra limbs and false representations of people, and can even entirely miss the prompt. AI generative text is about as accurate as a Wikipedia article, so you need to double check and edit the response. Unfortunately, AIs are only as smart as the datasets that train them. AIs have been called “racist” and “sexist” due to limited data. Google Gemini has also gone too far on diversity and inclusion, returning images that aren’t historically accurate.

The podcast panelists made an obvious point when they said that the quality of Google’s results has declined. Bad SEO, crappy content, and paid results pollute search. They claim that the best results Google returns come from Reddit posts. Reddit is a catch-all online forum with which Google recently negotiated a deal to use its content to train AI. That’s a great idea, especially when Reddit is going public on the stock market.

The problem is that Reddit is full of trolls who do things for %*^ and giggles. While Reddit is a brilliant source of information because it is created by real people, the bad actors will train the AI chatbots to be “racist” and “sexist” like previous iterations. The worst incident involves ethnically diverse Nazis:

“Google has apologized for what it describes as “inaccuracies in some historical image generation depictions” with its Gemini AI tool, saying its attempts at creating a “wide range” of results missed the mark. The statement follows criticism that it depicted specific white figures (like the US Founding Fathers) or groups like Nazi-era German soldiers as people of color, possibly as an overcorrection to long-standing racial bias problems in AI.”

I am not sure which is the problem: uninformed generalizations, flawed AI technology capable of zapping billions in market value in a few hours, or minimum viable products that are the equivalent of a blue jay fouling up a sparrow’s nest. Chirp. Chirp. Chirp.

Whitney Grace, March 6, 2024

The RCMP: Monitoring Sparks Criticism

March 5, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The United States and United Kingdom receive bad raps for monitoring their citizens’ Internet usage. Thankfully it is not as bad as in China, Russia, and North Korea. The “hat” of the United States is hardly criticized for anything, but even Canada has its foibles. Canada’s Royal Canadian Mounted Police (RCMP) is in water hot enough to melt all its snow, says The Madras Tribune: “RCMP Slammed For Private Surveillance Used To Trawl Social Media, ‘Darknet’.”

It’s been known that the RCMP has used private surveillance tools to monitor public-facing information and other social media since 2015. The Privacy Commissioner of Canada (OPC) revealed that, when the RCMP was collecting information, the police force failed to comply with privacy laws. The RCMP doesn’t agree with the OPC’s suggestions to make its monitoring activities with third-party vendors more transparent. The RCMP also argued that because it was using third-party vendors, it wasn’t required to ensure that information was collected according to Canadian law.

The Mounties’ non-compliance began in 2014 after three police officers were shot. An information-monitoring initiative called Project Wideawake started, and it involved the software Babel X from Babel Street, a US threat intelligence company. Babel X allowed the RCMP to search social media accounts, including private ones, and information from third-party data brokers.

Despite the backlash, the RCMP will continue to use Babel X:

“ ‘Despite the gaps in (the RCMP’s) assessment of compliance with Canadian privacy legislation that our report identifies, the RCMP asserted that it has done enough to review Babel X and will therefore continue to use it,’ the report noted. ‘In our view, the fact that the RCMP chose a subcontracting model to pay for access to services from a range of vendors does not abrogate its responsibility with respect to the services that it receives from each vendor.’”

Canada might be the politest country in North America, but behind the facade its government is as dedicated to law enforcement surveillance as the US.

Whitney Grace, March 5, 2024

Just One Big Google Zircon Gemstone for March 5, 2024

March 5, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I have a folder stuffed with Google gems for the week of February 26 to March 1, 2024. I have a write up capturing more Australians stranded by following Google Maps’ representation of a territory, Google getting tangled in another publisher lawsuit, Google figuring out how to deliver better search even when the user’s network connection sucks, Google firing 43 unionized contractors while in the midst of a legal action, and more.


The brilliant and very nice wizard adds, “Yes, we have created a thing which looks valuable, but it is laboratory-generated. And it is a gem, and a deeply flawed one, not something we can use to sell advertising yet.” Thanks, MSFT Copilot Bing thing. Good enough, and I liked the unasked-for ethnic nuance.

But there is just one story: Google nuked billions in market value and created the meme of the week by making many images the heart and soul of diversity. Pundits wanted one half of the Sundar and Prabhakar comedy show yanked off the stage. Check out Stratechery’s view of Google management’s grasp of leading the company in a positive manner in “Gemini and Google’s Culture.” The screw up was so bad that even the world’s favorite expert in aircraft refurbishment and modern gas-filled airships spoke up. (Yep, that’s the estimable Sergey Brin!)

In the aftermath of this brilliant PR move, CNBC ran a story yesterday that summed up the February 26 to March 1 Google experience. The title was “Google Co-Founder Sergey Brin Says in Rare Public Appearance That Company ‘Definitely Messed Up’ Gemini Image Launch.” What an incisive comment from one of the fathers of “clever” methods of determining relevance. The article includes this brilliant analysis:

He also commented on the flawed launch last month of Google’s image generator, which the company pulled after users discovered historical inaccuracies and questionable responses. “We definitely messed up on the image generation,” Brin said on Saturday. “I think it was mostly due to just not thorough testing. It definitely, for good reasons, upset a lot of people.”

That’s the Google “gem.” Amazing.

Stephen E Arnold, March 5, 2024

Techno Bashing from Thumb Typers. Give It a Rest, Please

March 5, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Every generation says that the latest cultural and technological advancements make people stupider. Novels were trash, the horseless carriage ruined traveling, radio encouraged wanton behavior, and the list continues. Everything changed with the implementation of television, aka the boob tube. Too much television does cause cognitive degradation. In layman’s terms, the brain goes into passive functioning rather than actively thinking. It would almost be a Zen moment. Addiction is fun for some.

The introduction of videogames, computers, and mobile devices augmented the decline of brain function. The combination of AI-chatbots and screens, however, might prove to be the ultimate dumbing down of humans. APA PsycNet posted a new study by Umberto León-Domínguez called, “Potential Cognitive Risks Of Generative Transformer-Based AI-Chatbots On Higher Order Executive Thinking.”

Psychologists already discovered that spending too much time on a screen (e.g., playing videogames, watching TV or YouTube, browsing social media) increases the risk of depression and anxiety. When that is paired with AI chatbots, or programs designed to replicate the human mind, humans rely on the algorithms to think for them.

León-Domínguez wondered if too much AI-chatbot consumption impairs cognitive development. In his abstract, he deploys some handy terms:

“The “neuronal recycling hypothesis” posits that the brain undergoes structural transformation by incorporating new cultural tools into “neural niches,” consequently altering individual cognition. In the case of technological tools, it has been established that they reduce the cognitive demand needed to solve tasks through a process called “cognitive offloading.””

“Cognitive offloading” perfectly describes younger generations and screen addicts. “Cultural tools into neural niches” also reflects how older crowds view new-fangled technology, coupled with how different parts of the brain are affected by technological advancements. The modern human brain works differently from a human brain in the 18th century or two thousand years ago.

He found:

“The pervasive use of AI chatbots may impair the efficiency of higher cognitive functions, such as problem-solving. Importance: Anticipating AI chatbots’ impact on human cognition enables the development of interventions to counteract potential negative effects. Next Steps: Design and execute experimental studies investigating the positive and negative effects of AI chatbots on human cognition.”

Are we doomed? No. Do we need to find ways to counteract stupidity? Yes. Do we know how it will be done? No.

Isn’t tech fun?

Whitney Grace, March 6, 2024

SearXNG: A New Metasearch Engine

March 4, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Internet browsers and search engines are two of the top applications used on computers. Search engine giants like Bing and Google don’t respect users’ privacy; they track everything. They create individual user profiles, then sell and use the information for targeted ads. The search engines also demote controversial information and return biased search results. On his blog, FlareXes shares a solution that protects privacy and embraces metasearch: “Build Your Own Private Search Engine With SearXNG.”

SearXNG is an open source, customizable metasearch engine that returns search results from multiple sources and respects privacy. It was originally forked from another open source project, SearX. SearXNG has an extremely functional user interface. It also aggregates information from over seventy search engines, including DuckDuckGo, Brave Search, Bing, and Google.

The best thing about SearXNG is its protection of user privacy: “But perhaps the best thing about SearXNG is its commitment to user privacy. Unlike some search engines, SearXNG doesn’t track users or generate personalized profiles, and it never shares any information with third parties.”

Because SearXNG is a metasearch engine, it supports organic search results. This allows users to review information that would otherwise go unnoticed. That doesn’t mean the results will be unbiased, however. The idea is that SearXNG returns better results than a revenue juggernaut:

“SearXNG aggregates data from different search engines that doesn’t mean this could be biased. There is no way for Google to create a profile about you if you’re using SearXNG. Instead, you get high-quality results like Google or Bing. SearXNG also randomizes the results so no SEO or top-ranking will not gonna work. You can also enable independent search engines like Brave Search, Mojeek etc.”

If you want a search engine that doesn’t collect your personal data and returns better search results, SearXNG warrants a test drive. The installation may require some tech fiddling.
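For the curious, here is a small sketch of what talking to a self-hosted SearXNG instance can look like. It assumes an instance running at localhost:8080 with the “json” output format enabled in settings.yml (an assumption; check your instance’s configuration), and it only builds the query URL rather than fetching it, so it runs without a live server.

```python
# Sketch: construct a query URL for a self-hosted SearXNG instance's /search
# endpoint, asking for JSON output. The base URL and engine names are
# illustrative assumptions; JSON output must be enabled in the instance's
# settings.yml for such a request to succeed.
from urllib.parse import urlencode

def build_search_url(base: str, query: str, engines=None) -> str:
    """Construct a SearXNG query URL requesting JSON-formatted results."""
    params = {"q": query, "format": "json"}
    if engines:  # optionally restrict to specific upstream engines
        params["engines"] = ",".join(engines)
    return f"{base}/search?{urlencode(params)}"

url = build_search_url("http://localhost:8080", "open source metasearch",
                       engines=["brave", "mojeek"])
print(url)
# The actual fetch (with urllib.request or similar) is left out so this
# sketch runs without a running instance.
```

In practice one would pass the resulting URL to an HTTP client and parse the JSON response; the point here is only the shape of the request.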

Whitney Grace, March 4, 2024
