The High School Science Club Got Fined for Its Management Methods

December 4, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I almost missed this story. “Google Reaches $27 Million Settlement in Case That Sparked Employee Activism in Tech” which contains information about the cost of certain management methods. The write up asserts:

Google has reached a $27 million settlement with employees who accused the tech giant of unfair labor practices, setting a record for the largest agreement of its kind, according to California state court documents that haven’t been previously reported.


The kindly administrator (a former legal eagle) explains to the intelligent teens in the high school science club something unpleasant. Their treatment of some non sci-club types will cost them. Thanks, MSFT Copilot. Who’s in charge of the OpenAI relationship now?

The article pegs the “worker activism” on Google. I don’t know if Google is fully responsible. Googzilla’s shoulders and wallet are plump enough to carry the burden in my opinion. The article explains:

In terminating the employee, Google said the person had violated the company’s data classification guidelines that prohibited staff from divulging confidential information… Along the way, the case raised issues about employee surveillance and the over-use of attorney-client privilege to avoid legal scrutiny and accountability.

Not surprisingly, the Google management took a stand against the apparently unjust and unwarranted fine. The story notes, via a quote from someone who is in the science club and familiar with its management methods:

“While we strongly believe in the legitimacy of our policies, after nearly eight years of litigation, Google decided that resolution of the matter, without any admission of wrongdoing, is in the best interest of everyone,” a company spokesperson said.

I want to point out that the write up includes links to other articles explaining how the Google is refining its management methods.

Several questions:

  • Will other companies hit by activist employees be excited to learn the outcome of Google’s brilliant legal maneuvers, which triggered a fine of a mere $27 million?
  • Has Google published a manual of its management methods? If not, for what is the online advertising giant waiting?
  • With more than 170,000 (plus or minus) employees, has Google found a way to replace the unpredictable, expensive, and recalcitrant employees with its smart software? (Let’s ask Bard, shall we?)

After 25 years, the Google finds a way to establish benchmarks in managerial excellence. Oh, I wonder if the company will change its law firm lineup. I mean $27 million. Come on. Loose the semantic noose and make more ads “relevant.”

Stephen E Arnold, December 4, 2023

Good Fences, Right, YouTube? And Good Fences in Winter Even Better

December 4, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Remember that line from the grumpy American poet Bobby Frost? (I have on good authority that Bobby was not a charmer. And who, pray tell, was my source? How about a friend of the poet’s who worked with him in South Shaftsbury.)

Like those in the Nor’East say, “Good fences make good neighbors.”

The line is not original. Bobby’s pal told me that the saying was a “pretty common one” among the Shaftsburians. Bobby appropriated the line in his poem “Mending Wall.” (It is loved by millions of high school students.) The main point of the poem is that “Something there is that doesn’t love a wall.” The key is “something.”

The fine and judicious, customer centric, and well-managed outfit Google is now in the process of understanding the “something that doesn’t love a wall,” digital or stone.

“Inside the Arms Race between YouTube and Ad Blockers” updates the effort of the estimable advertising outfit and — well — almost everyone. The article explains:

YouTube recently took dramatic action against anyone visiting its site with an ad blocker running — after a few pieces of content, it’ll simply stop serving you videos. If you want to get past the wall, that ad blocker will (probably) need to be turned off; and if you want an ad-free experience, better cough up a couple bucks for a Premium subscription.

The write up carefully explains that one must pay a “starting” monthly fee of $13.99 to avoid the highly relevant advertisements for metal men’s wallets, the total home gym which seems entirely inappropriate for a 79-year-old dinobaby like me, and some type of women’s undergarment. Yeah, that ad matching to a known user is doing a bang-up job in my opinion. I bet the Skims’ marketing manager is thrilled I am getting their message. How many packs of Skims do I buy in a lifetime? Zero. Yep, zero.


Yes, sir. Good fences make good neighbors. Good enough, MSFT Copilot. Good enough.

Okay, that’s the ad blocker thing, which I have identified as Google’s digital Battle of Waterloo in honor of a movie about everyone’s favorite French emperor, Nappy B.

But what the cited write up and most of the coverage are not focusing on is the question, “Why the user hostile move?” I want to share some of my team’s ideas about the motive force behind this disliked and quite annoying move by that company everyone loves (including the Skims’ marketing manager?).

First, the emergence of ChatGPT-type services is having a growing impact on Google’s online advertising business. One can grind through Google’s financials and not find any specific item that says, “The Duke of Wellington and a crazy old Prussian are gearing up for a fight.” So I will share some information we have rounded up by talking to people and looking through the data gathered about Googzilla. Specifically, users want information packaged to answer or to “appear” to answer their question. Some want lists; some want summaries; and some just want to avoid the routine of entering the query, clicking through mostly irrelevant results, scanning for something that is sort of close to an answer, and using that information to buy a ticket or get a Taylor Swift poster, whatever. That means that the broad trend in the usage of Google search is a bit like the town of Grindavik, Iceland. “Something” is going on, and it is unlikely to bode well for the future of that charming town in Iceland. That’s the “something” that is hostile to walls. Some forces are tough to resist, even by Googzilla and friends.

Second, despite the robust usage of YouTube, it costs more money to operate that service than it does to display cached ads and previously spidered information from Google-compliant Web sites. Thus, as pressure on traditional search goes up from the ChatGPT-type services, the darker the clouds on the search business horizon look. The big storm is not pelting the Googleplex yet, but it does look ominous perched on the horizon and moving slowly. Don’t get our point wrong: Running a Google-scale search business is expensive, but it has been engineered and tuned to deliver a tsunami of cash. The YouTube thing just costs more and is going to have a tough time replacing lost old-fashioned search revenue. What’s a pressured Googzilla going to do? One answer is, “Charge users.” Then raise prices. Gee, that’s the much-loved cable model, isn’t it? And the pressure point is motivating some users who are developers to find ways to cut holes in the YouTube fence. The fix? Make the fence bigger and more durable? Isn’t that a RAND arms race scenario? What’s an option? Where’s a J. Robert Oppenheimer-type when one needs him?

The third problem is that advertisers want their messages displayed in an inoffensive context. Also, advertisers — because the economy for some outfits sucks — now are starting to demand proof that their ads are being displayed in front of buyers known to have an interest in their product. Yep, I am talking about the Skims’ marketing officer as well as any intermediary hosing money into Google advertising. I don’t want to try to convince those who are writing checks to the Google of the following: “Absolutely. Your ad dollars are building your brand. You are getting leads. You are able to reach buyers no other outfit can deliver.” Want proof? Just look at this dinobaby. I am not buying health food, hidden carry holsters, or those really cute flesh-colored women’s undergarments. The question is, “Are the ads just being dumped, or are they actually targeted to someone who is interested in a product category?” Good question, right?

Net net: The YouTube ad blocking is shaping up to be a Google moment. Now Google has sparked an adversarial escalation in the world of YouTube ad blockers. What are Google’s options now that Googzilla is backed into a corner? Maybe Bobby Frost has a poem about it: “Some say the world will end in fire, Some say in ice.” How does Googzilla fare in the ice?

Stephen E Arnold, December 4, 2023

The RAG Snag: Convenience May Undermine Thinking for Millions

December 4, 2023

This essay is the work of a dumb dinobaby. No smart software required.

My understanding is that everyone who is informed about AI knows about RAG. The acronym means Retrieval Augmented Generation. One explanation of RAG appears in the nVidia blog in the essay “What Is Retrieval Augmented Generation aka RAG.” nVidia, in my opinion, loves whatever drives demand for its products.

The idea is that machine processes minimize errors in output. The write up states:

Retrieval-augmented generation (RAG) is a technique for enhancing the accuracy and reliability of generative AI models with facts fetched from external sources.

Simplifying the idea, RAG methods gather information and perform reference checks. The checks can be performed by consulting other smart software, the Web, or knowledge bases like an engineering database. nVidia provides a “reference architecture,” which obviously relies on nVidia products.
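The retrieve-then-check idea described above can be sketched in a few lines of code. This is a toy illustration of the RAG pattern — retrieve supporting documents, then prepend them to the prompt so the generator can ground its answer — not Nvidia’s reference architecture; the corpus, the naive token-overlap scoring, and the prompt format are all invented for illustration.

```python
# Toy sketch of retrieval-augmented generation (RAG).
# Corpus, scoring, and prompt template are illustrative assumptions.

CORPUS = [
    "RAG stands for retrieval-augmented generation.",
    "Bearings in the X-200 pump must be replaced every 5,000 hours.",
    "The knowledge base is refreshed nightly from engineering documents.",
]

def tokenize(text):
    return set(text.lower().split())

def retrieve(query, corpus, k=2):
    """Rank documents by naive token overlap with the query; return top k."""
    scored = sorted(corpus,
                    key=lambda doc: len(tokenize(doc) & tokenize(query)),
                    reverse=True)
    return scored[:k]

def build_prompt(query, corpus):
    """Augment the user's question with retrieved context before generation."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How often are the X-200 pump bearings replaced?", CORPUS))
```

In a production system the overlap scoring would be replaced by vector embeddings and the prompt handed to a large language model; the shape of the pipeline — retrieve, assemble context, generate — is the same.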

The write up does an obligatory tour of a couple of search and retrieval systems. Why? Most of the trendy smart software offerings are demonstrations of information access methods wrapped up in tools that think for the “searcher,” the person requiring output to answer a question, explain a concept, or help the human to think clearly. (In dinosaur days, the software performs functions once associated with a special librarian or an informed colleague who could ask questions and conduct a reference interview. I hope the dusty concepts did not make you sneeze.)


“Yes, young man. The idea of using multiple sources can result in learning. We now call this RAG, not research.” The young man, stunned with the insight, says, “WTF?” Thanks, MSFT Copilot. I love the dual tassels. The young expert is obviously twice as intelligent as the somewhat more experienced dinobaby with the weird fingers.

The article includes a diagram which I found difficult to read. I think the simple blocks represent the way in which smart software obviates the need for the user to know much about the sources used, their verification, or their provenance. Furthermore, the diagram makes the entire process look just like getting the location of a pizza restaurant from an iPhone (no Google Maps for me).

The highlight of the write up is the set of links within the article. An interested reader can follow the links for additional information.

Several observations:

  1. The emergence of RAG as a replacement for such concepts as “search,” “special librarian,” and “provenance” makes clear that finding information is a problem not solved for systems, software, and people. New words make the “old” problem appear “new” again.
  2. The push for recursive methods to figure out what’s “right” or “correct” will regress to the mean; that is, despite the mathiness of the methods, systems will deliver “acceptable” or “average” outputs. A person who thinks that software will impart genius to a user is believing in a dream. These individuals will not be living the dream.
  3. Widespread use of smart software and automation means that, for most people, critical thinking will become the equivalent of an appendix. Instead of mother knows best, the system will provide the framing, the context, and the implication that the outputs are correct.

RAG opens new doors: those who operate widely adopted smart software systems will have significant control over what people think and, thus, do. If the iPhone shows a pizza joint, what about other pizza joints? Just ask. The system will not show pizza joints not verified in some way. If that “verification” requires the company advertising to be in the data set, well, that’s the set of pizza joints one will see. The others? Invisible, not on the radar, and, it seems, doomed to failure.

RAG is significant because it is new speak and it marks a disassociation of “knowing” from “accepting” output information as the best and final words on a topic. I want to point out that for a small percentage of humans, their superior cognitive abilities will ensure a different trajectory. The “new elite” will become the individuals who design, shape, control, and deploy these “smart” systems.

Most people will think they are informed because they can obtain information from a device. The mass of humanity will not know how information control influences their understanding and behavior. Am I correct? I don’t know. I do know one thing: This dinobaby prefers to do knowledge acquisition the old-fashioned, slow, inefficient, and uncontrolled way.

Stephen E Arnold, December 4, 2023

Health Care and Steerable AI

December 4, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Large language models are powerful tools that can be used for the betterment of humanity. Or, in the hands of for-profit entities, to get away with wringing every last penny out of a system in the most opaque and intractable ways possible. When that system manages the wellbeing of millions and millions of people, the fallout can be tragic. TechDirt charges, “’AI’ Is Supercharging our Broken Healthcare System’s Worst Tendencies.”

Reporter Karl Bode begins by illustrating the bad blend of corporate greed and AI with journalism as an example. Media companies, he writes, were so eager to cut corners and dodge unionized labor they adopted AI technology before it was ready. In that case the results were “plagiarism, bull[pucky], a lower quality product, and chaos.” Those are bad. Mistakes in healthcare are worse. We learn:

“Not to be outdone, the very broken U.S. healthcare industry is similarly trying to layer half-baked AI systems on top of a very broken system. Except here, human lives are at stake. For example UnitedHealthcare, the largest health insurance company in the US, has been using AI to determine whether elderly patients should be cut off from Medicare benefits. If you’ve ever navigated this system on behalf of an elderly loved one, you likely know what a preposterously heartless [poop]whistle this whole system already is long before automation gets involved. But a recent investigation by STAT showed the AI consistently made major errors and cut elderly folks off from needed care prematurely, with little recourse by patients or families. … A recent lawsuit filed in the US District Court for the District of Minnesota alleges that the AI in question was reversed by human review roughly 90 percent of the time.”

And yet, employees were ordered to follow the algorithm’s decisions no matter their inanity. For the few patients who did win hard-fought reversals, those decisions were immediately followed by fresh rejections that kicked them back to square one. Bode writes:

“The company in question insists that the AI’s rulings are only used as a guide. But it seems pretty apparent that, as in most early applications of LLMs, the systems are primarily viewed by executives as a quick and easy way to cut costs and automate systems already rife with problems, frustrated consumers, and underpaid and overtaxed support employees.”

But is there hope these trends will be eventually curtailed? Well, no. The write-up concludes by scoffing at the idea that government regulations or class action lawsuits are any match for corporate greed. Sounds about right.

Cynthia Murrell, December 4, 2023

Google Maps: Trust in Us. Well, Mostly

December 1, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Friday and December 1, 2023. I want to commemorate the beginning of the last month of what has been an exciting 2023. How exciting. How about a Google Maps’ story?

Navigate to “Google Maps Mistake Leaves Dozens of Families Stranded in the Desert”. Here’s the story: The outstanding and from my point of view almost unusable Google Maps directed a number of people to a “dreadful dirt path during a dust storm.”


“Mommy,” says the teenage son, “I told you exactly what the smart map system said to do. Why are we parked in a tree?” Thanks, MSFT Copilot. Good enough art.

Hey, wait up. I thought Google had developed a super duper quantum smart weather prediction system. Is Google unable to cross correlate Google Maps with potential negative weather events?

The answer, “Who are you kidding?” Google appears to be in content marketing hyperbole “we are better at high tech” mode. Let’s not forget the Google breakthrough regarding material science. Imagine. Google’s smart software identified oodles of new materials. Was this “new” news? Nope. Computational chemists have been generating potentially useful chemical substances for — what is it now? — decades. Is the Google materials science breakthrough going to solve the problem of burned food sticking to a cookie sheet? Sure, I am waiting for the news release.

What’s up with the Google Maps?

The write up says:

Google Maps apologized for the rerouting disaster and said that it had removed that route from its platform.

Hey, that’s helpful. I assume it was a quantum answer to a “we’re smart” outfit.

I wish I had kept the folder which had my collection of Google Map news items. I do recall someone who drove off a cliff. I had my own notes about my trying to find Seymour Rubinstein’s house on a bright sunny day. The inventor of WordStar did not live in the Bay. That was the location of Mr. Rubinstein’s house, according to Google Maps. I did find the house, and I had sufficient common sense not to drive into the water. I had other examples of great mappiness, but, alas!, no longer.

Is directing a harried mother into a desert during a dust storm humorous? Maybe to some in Sillycon Valley. I am not amused. I don’t think the mother was amused because in addition to the disturbing situation, her vehicle suffered $5,000 in damage.

The question is, “Why?”

Perhaps Google’s incentive system is not aligned to move consumer products like Google Maps from “good enough” to “excellent.” And the money that could have been spent on improving Google Maps may be needed to output stories about Google’s smart software inventing new materials.

Interesting. Aren’t OpenAI and the much-loved Microsoft leading the smart software mindshare race? I think so. Perhaps Maps’ missteps are a signal of management misalignment and deep issues within the Alphabet Google YouTube inferiority complex?

Stephen E Arnold, December 1, 2023

AI Adolescence Ascendance: AI-iiiiii!

December 1, 2023

This essay is the work of a dumb dinobaby. No smart software required.

The monkey business of smart software has revealed its inner core. The cute high school essays and the comments about how to do search engine optimization are based on the fundamental elements of money, power, and what I call ego-tanium. When these fundamental elements go critical, exciting things happen. I know this assertion is correct because I read “The AI Doomers Have Lost This Battle,” an essay which appears in the weird orange newspaper The Financial Times.

The British bastion of practical financial information says:

It would be easy to say that this chaos showed that both OpenAI’s board and its curious subdivided non-profit and for-profit structure were not fit for purpose. One could also suggest that the external board members did not have the appropriate background or experience to oversee a $90bn company that has been setting the agenda for a hugely important technology breakthrough.

In my lingo, the orange newspaper is pointing out that a high school science club management style is like a burning electric vehicle. Once ignited, the message is, “Stand back, folks. Let it burn.”


“Isn’t this great?” asks the driver. The passenger, a former Doomsayer, replies, “AIiiiiiiiiii.” Thanks, MidJourney, another good enough illustration which I am supposed to be able to determine contains copyrighted material. Exactly how, may I ask? Oh, you don’t know.

The FT picks up a big-picture idea; that is, smart software can become a problem for humanity. That’s interesting because the book “Weapons of Math Destruction” did a good job of explaining why algorithms can go off the rails. But the FT’s essay embraces the idea of software as the Terminator with the enthusiasm of the crazy old-time guy who shouted “Eureka.”

I note this passage:

Unfortunately for the “doomers”, the events of the last week have sped everything up. One of the now resigned board members was quoted as saying that shutting down OpenAI would be consistent with the mission (better safe than sorry). But the hundreds of companies that were building on OpenAI’s application programming interfaces are scrambling for alternatives, both from its commercial competitors and from the growing wave of open-source projects that aren’t controlled by anyone. AI will now move faster and be more dispersed and less controlled. Failed coups often accelerate the thing that they were trying to prevent.

Okay, the yip yap about slowing down smart software is officially wrong. I am not sure about the government committees and their white papers about artificial intelligence. Perhaps the documents can be printed out and used to heat the campsites of knowledge workers who find themselves out of work.

I find it amusing that some of the governments worried about smart software are involved in autonomous weapons. The idea of a drone that, with access to a facial recognition component, can pick out a target and then explode over the person’s head is an interesting one.

Is there a connection between the high school antics of OpenAI, the hand-wringing about smart software, and the diffusion of decider systems? Yes, and the relationship is one of those hockey stick curves so loved by MBAs from prestigious US universities. (Non-reproducibility and a fondness for Jeffrey Epstein-type donors is normative behavior.)

Those who want to cash in on the next Big Thing are officially in the 2023 equivalent of the California gold rush. Unlike the FT, I had no doubt about the ascendance of the go-fast approach to technological innovation. Technologies, even lousy ones, are like gerbils. Start with two or three, and pretty soon there are lots of gerbils.

Will the AI gerbils and their progeny be good or bad? Because they are based on the essential elements of life — money, power, and ego-tanium — the outlook is … exciting. I am glad I am a dinobaby. Too bad about the Doomers, who are regrouping to try to build a shield around the most powerful elements now emitting excited particles. The glint in the eyes of Microsoft executives and some venture firms is the trace of high-energy AI emissions in the innovators’ aqueous humor.

Stephen E Arnold, December 1, 2023

Deepfakes: Improving Rapidly with No End in Sight

December 1, 2023

This essay is the work of a dumb dinobaby. No smart software required.

The possible applications of AI technology are endless, and we’ve barely imagined the opportunities. While tech experts mainly focus on the benefits of AI, bad actors are concentrating on how to use it for illegal activities. The Next Web explains how bad actors are using AI for scams: “Deepfake Fraud Attempts Are Up 3000% in 2023 — Here’s Why.” Bad actors are using cheap and widely available AI technology to create deepfake content for fraud attempts.

Onfido, an ID verification company in London, reports that deepfake fraud attempts rose thirty-one-fold in 2023 — a 3,000% year-on-year gain. The AI tool of choice for bad actors is the face-swapping app. The fakes range in quality from a bad copy-and-paste job to sophisticated, blockbuster-quality fabrications. While the crude attempts are laughable, it only takes one successful facial identity verification for fraudsters to win.

The bad actors concentrate on quantity over quality; these cheap attempts accounted for 80.3% of attacks in 2023. Biometric information is a key component in stopping fraudsters:

“Despite the rise of deepfake fraud, Onfido insists that biometric verification is an effective deterrent. As evidence, the company points to its latest research. The report found that biometrics received three times fewer fraudulent attempts than documents. The criminals, however, are becoming more creative at attacking these defenses. As GenAI tools become more common, malicious actors are increasingly producing fake documents, spoofing biometric defenses, and hijacking camera signals.”

Onfido suggests using “liveness” biometrics in verification technology. Liveness determines whether a user is actually present instead of a deepfake, photo, recording, or masked individual.

As AI technology advances, so will bad actors’ scams.

Whitney Grace, December 1, 2023

Google and X: Shall We Again Love These Bad Dogs?

November 30, 2023

This essay is the work of a dumb dinobaby. No smart software required.

Two stories popped out of my blah newsfeed this morning (Thursday, November 30, 2023). I want to highlight each and offer a handful of observations. Why? I am a dinobaby, and I remember the adults who influenced me telling me to behave, use common sense, and follow the rules of “good” behavior. Dull? Yes. A license to cut corners and do crazy stuff? No.

The first story, if it is indeed accurate, is startling. “Google Caught Placing Big-Brand Ads on Hardcore Porn Sites, Report Says” includes a number of statements about the Google which make me uncomfortable. For instance:

advertisers who feel there’s no way to truly know if Google is meeting their brand safety standards are demanding more transparency from Google. Ideally, moving forward, they’d like access to data confirming where exactly their search ads have been displayed.

Where are big brand ads allegedly appearing? How about “undesirable sites.” What comes to mind for me is adult content. There are some quite sporty ads on certain sites that would make a Methodist Sunday school teacher blush.


These two big dogs are having a heck of a time ruining the living room sofa. Neither dog knows that the family will not be happy. These are dogs, not the mental heirs of Immanuel Kant. Thanks, MSFT Copilot. The stuffing looks like soap bubbles, but you are “good enough,” the benchmark for excellence today.

But the shocking factoid is that Google does not provide a way for advertisers to know where their ads have been displayed. Also, there is a possibility that Google shared ad revenue with entities which may be hostile to the interests of the US. Let’s hope that the assertions reported in the article are inaccurate. But if big-brand ads are displayed on sites with content which could conceivably erode brand value, what exactly is Google’s system doing? I will return to this question in the observations section of this essay.

The second article is equally shocking to me.

“Elon Musk Tells Advertisers: ‘Go F*** Yourself’” reports that the EV and rocket man with a big hole-digging machine allegedly said about advertisers who purchase promotions on X.com (Twitter?):

“Don’t advertise,” … “If somebody is going to try to blackmail me with advertising, blackmail me with money, go f*** yourself. Is that clear? I hope it is.” … If advertisers don’t return, Musk said, “what this advertising boycott is gonna do is it’s gonna kill the company.”

The cited story concludes with this statement:

The full interview was meandering and at times devolved into stream of consciousness responses; Musk spoke for triple the time most other interviewees did. But the questions around Musk’s own actions, and the resulting advertiser exodus — the things that could materially impact X — seemed to garner the most nonchalant answers. He doesn’t seem to care.

Two stories. Two large and successful companies. What can a person like myself conclude, recognizing that there is a possibility that both stories may have some gaps and flaws:

  1. There is a disdain for old-fashioned “values” related to acceptable business practices
  2. The thread of pornography and foul language runs through the reports. The notion of well-crafted statements and behaviors is not part of the Google and X game plan in my view
  3. The indifference of the senior managers at both companies that seeps through the descriptions of how Google and X operate strikes me as intentional.

Now why?

I think that both companies are pushing the edge of business behavior. Google obviously is distributing ad inventory anywhere it can to try and create a market for more ads. Instead of telling advertisers where their ads are displayed or giving an advertiser control over where ads should appear, Google just displays the ads. The staggering irrelevance of the ads I see when I view a YouTube video is evidence that Google knows zero about me despite my being logged in and using some Google services. I don’t need feminine undergarments, concealed weapons products, or bogus health products.

With X.com, the dismissive attitude of the firm’s senior management reeks of disdain. Why would someone advertise on a system which promotes behaviors that are detrimental to one’s mental setup?

The two companies are different, but in a way they are similar in their approach to users, customers, and advertisers. Something has gone off the rails in my opinion at both companies. It is generally a good idea to avoid riding trains which are known to run on bad tracks, ignore safety signals, and demonstrate remarkably questionable behavior.

What if the write ups are incorrect? Wow, both companies are paragons. What if both write ups are dead accurate? Wow, wow, the big dogs are tearing up the living room sofa. More than “bad dog” is needed to repair the furniture for living.

Stephen E Arnold, November 30, 2023

Google Maps: Rapid Progress on Un-Usability

November 30, 2023

This essay is the work of a dumb dinobaby. No smart software required.

I read a Xhitter.com post about Google Maps. Those who have either heard me talk about the “new” Google Maps or who have read some of my blog posts on the subject know my view. The current Google Maps is useless for my needs. Last year, as one of my team was driving to a Federal secure facility, I bought an overpriced paper map at one of the truck stops. Why? I had no idea how to interact with the map in a meaningful way. My recollection was that I could coax Google Maps and Waze to be semi-helpful. Now the Google Maps developers have become tangled in a very large thorn bush. The team discusses how large the thorn bush is, how sharp the thorns are, and how such a large thorn bush could thrive in the Googley hot house.


This dinobaby expresses some consternation at [a] not knowing where to look, [b] not knowing how to show the route, and [c] trying not to cause a motor vehicle accident. Thanks, MSFT Copilot. Good enough, I think.

The result is enhancements to Google Maps which are the digital equivalent of skin cancer. The disgusting result is a vehicle for advertising and engagement that no one can use without head-scratching moments. Am I alone in my complaint? Nope, the aforementioned Xhitter.com post aligns quite well with my perception. The author is a person who once designed a more usable version of Google Maps.

Her Xhitter.com post highlights the digital skin cancer the team of Googley wizards has concocted. Here’s a screen capture of her annotated, life-threatening disfigurement:


She writes:

The map should be sacred real estate. Only things that are highly useful to many people should obscure it. There should be a very limited number of features that can cover the map view. And there are multiple ways to add new features without overlaying them directly on the map.

Sounds good. But Xooglers and other outsiders are not likely to get much traction from the Map team. Everyone is working hard at landing in the hot AI area or some other discipline which will deliver a bonus and a promotion. Maps? Nope.

The former Google Maps’ designer points out:

In 2007, I was 1 of 2 designers on Google Maps. At that time, Maps had already become a cluttered mess. We were wedging new features into any space we could find in the UI. The user experience was suffering and the product was growing increasingly complicated. We had to rethink the app to be simple and scale for the future.

Yep, Google Maps: a case study of brilliant people who have lost the atlas to reality. And what is “sacred” at Google? Ad revenue, not making dear old grandma safer when she drives. (Tesla, Cruise, where are those smart, self-driving cars? Ah, I forgot. They are with Waymo, keeping their profile low.)

Stephen E Arnold, November 30, 2023

Omegle: Hasta La Vista

November 30, 2023

This essay is the work of a dumb dinobaby. No smart software required.

In the Internet’s early days, users could sign into chatrooms and talk with strangers. While chatrooms have fallen out of favor, the idea of talking with strangers hung on, though now it is accompanied by video. Chatroulette and Omegle are popular chat applications that allow users to video chat with random individuals. The apps are notorious for pranks and NSFW content, including child sexual abuse material. The Independent shared a story about one of the two: “Omegle Anonymous Chat App Shuts Down after 14 Years.”

Omegle had a simple concept: sign in, be connected to another random person, and video chat for as long as you like. Leif K-Brooks launched the chat platform with good intentions in 2009, but it didn’t take long for bad actors to infiltrate it. K-Brooks tried to stop criminal activities on Omegle with features such as “monitored chats” with moderators. They didn’t work, and Omegle continued to receive flak. K-Brooks doesn’t want to deal with the criticism anymore:

“The intensity of the fight over use of the site had forced him to decide to shut it down, he said, and it will stop working straight away. ‘As much as I wish circumstances were different, the stress and expense of this fight – coupled with the existing stress and expense of operating Omegle, and fighting its misuse – are simply too much. Operating Omegle is no longer sustainable, financially nor psychologically. Frankly, I don’t want to have a heart attack in my 30s,’ wrote Leif K-Brooks, who has run the website since founding it.”

Omegle’s popularity rose during the pandemic. The sudden popularity surge highlighted the criminal acts on the video chat platform. K-Brooks believes that his critics used fear to shut down the Web site. He also acknowledged that people are quicker to attack and slower to recognize shared humanity. He theorizes that social media platforms are being labeled negatively because of small groups of bad actors.

Whitney Grace, November 30, 2023
