KPIs: The Perfect Tool for Slacker Managers

September 22, 2023

Many businesses have adopted key performance indicators (KPIs) in an effort to minimize subjectivity in human resource management. Cognitive researcher and Promaton CTO Ágoston Török explores the limitations of this approach in his blog post, “How to Avoid KPI Psychosis in your Organization?”

Török takes a moment to recall the human biases KPIs are meant to avoid: availability bias, recency bias, the halo/horn effects, overconfidence bias, anchoring bias, and the familiar confirmation bias. He writes:

“Enter KPIs as the objective truth. Free of subjectivity, perfect, right? Not so fast. In fact, often our data collection and measurement are also biased by us (e.g. algorithmic bias). And even if that is not the case, unfortunately, KPIs suffer from tunnel vision: they measure what is measurable, while not necessarily all aspects of the situation are. Albert Einstein put it brilliantly: ‘Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted.’ This results in perverse motivation in many organizations, where people have to choose between doing their job well (broader reality) or getting promoted for meeting the KPIs (tunnel vision). And that’s exactly the KPI psychosis I described above.”

That does defeat the purpose. Not surprisingly, the solution is to augment KPI software with human judgment.

“KPIs should be used in combination with human intuition to enable optimal decision-making. So not just intuition or data, but a constant back and forth of making (i.e. intuition) and testing (i.e. data) hypotheses. … So you work on reaching your objective and while doing so you constantly check both what your KPI shows and also how much you can rely on it.”

That sounds like a lot of work. Can’t we just offload personnel decisions to AI and be done with it? Not yet, dear executives, not yet.

Cynthia Murrell, September 22, 2023

Amazon Switches To AI Review Summaries

September 22, 2023

The online yard sale eBay offers an AI-generated description feature for sellers. Following in the same vein, Engadget reports that, “Amazon Begins Rolling Out AI-Generated Review Summaries” for products with clickable keywords. Amazon announced in June 2023 that it was testing an AI summary tool across a range of products. The company officially launched the tool in August, declaring that AI is at the heart of Amazon.

Amazon developed the AI summary tool so consumers can read buyers’ opinions without scrolling through pages of information. The summaries are described as a wrap-up of customer consensus akin to film blurbs on Rotten Tomatoes. The AI summaries contain clickable tags that showcase common words and consistent themes from reviews. Clicking on the tags will take consumers to the full review with the information.
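
Amazon has not published how these tags are generated, but the basic mechanics are easy to picture. Here is a minimal, purely illustrative Python sketch (the tiny review list and function names are invented, not Amazon’s pipeline): tally frequently used words across reviews and map each tag back to the reviews that mention it.

```python
from collections import Counter
import re

# Toy review corpus; Amazon's real data and pipeline are not public.
reviews = [
    "Great battery life and a crisp display.",
    "Battery life is excellent, but the speakers are weak.",
    "Crisp display, decent battery, weak speakers.",
]

STOPWORDS = {"the", "and", "a", "is", "but", "are"}

def keyword_tags(reviews, top_n=3):
    """Count non-stopword tokens across all reviews and return the most common."""
    counts = Counter()
    for text in reviews:
        tokens = re.findall(r"[a-z]+", text.lower())
        counts.update(t for t in tokens if t not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

def reviews_for_tag(reviews, tag):
    """Simulate a clickable tag: return the reviews that mention the keyword."""
    return [r for r in reviews if tag in r.lower()]

tags = keyword_tags(reviews)
print(tags)                              # e.g. ['battery', 'life', 'crisp']
print(reviews_for_tag(reviews, "battery"))
```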

AI-generated review summaries bring up another controversial topic: Amazon and fake reviews. Fake reviews litter the selling platform, much like the slew of counterfeit products that Amazon, eBay, and other online selling platforms battle. While Amazon claims it takes a proactive stance to detect and delete the reviews, it does not catch all the fakes. It is speculated that AI-generated reviews from ChatGPT or other chatbots are harder for Amazon to catch.

As for its own AI summary tool, Amazon plans to use it only on verified purchases and to deploy more AI models to detect fake reviews. Humans, with their more discerning organic brains, will be used for clarification. Amazon said about its new tool:

“‘We continue to invest significant resources to proactively stop fake reviews,’ Amazon Community Shopping Director Vaughn Schermerhorn said. ‘This includes machine learning models that analyze thousands of data points to detect risk, including relations to other accounts, sign-in activity, review history, and other indications of unusual behavior, as well as expert investigators that use sophisticated fraud-detection tools to analyze and prevent fake reviews from ever appearing in our store. The new AI-generated review highlights use only our trusted review corpus from verified purchases, ensuring that customers can easily understand the community’s opinions at a glance.’”
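
Amazon does not disclose its detection models, but the kind of signal-weighing Schermerhorn describes can be pictured with a toy example. The sketch below uses invented features and weights to score a reviewer account on a few of the data points he mentions (relations to other accounts, sign-in activity, review history); it illustrates the idea only, not Amazon’s actual system.

```python
from dataclasses import dataclass

@dataclass
class ReviewerSignals:
    """A few of the signal types Amazon mentions; the values here are invented."""
    linked_flagged_accounts: int   # relations to other suspicious accounts
    logins_per_day: float          # sign-in activity
    reviews_per_week: float        # review history / velocity
    verified_purchase: bool

def risk_score(s: ReviewerSignals) -> float:
    """Toy linear risk score in [0, 1]; real systems use learned models."""
    score = 0.0
    score += 0.3 * min(s.linked_flagged_accounts, 5) / 5
    score += 0.2 * min(s.logins_per_day, 50) / 50
    score += 0.3 * min(s.reviews_per_week, 30) / 30
    score += 0.2 * (0.0 if s.verified_purchase else 1.0)
    return round(score, 2)

suspicious = ReviewerSignals(4, 40.0, 25.0, verified_purchase=False)
ordinary = ReviewerSignals(0, 1.0, 0.5, verified_purchase=True)
print(risk_score(suspicious), risk_score(ordinary))  # prints 0.85 0.01
```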

AI tools are trained using language models that contain known qualitative errors. The same AI tools are used to teach more AI, and so on. While we do not know what Amazon is using to train its AI summary tool, we would not be surprised if the fake reviews are generated with models similar to Amazon’s own. It will come down to Amazon AI vs. counterfeit AI. Who will win?

Whitney Grace, September 22, 2023

Kill Off the Dinobabies and Get Younger, Bean Counter-Pleasing Workers. Sound Familiar?

September 21, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Google, Meta, Amazon Hiring low-Paid H1B Workers after US Layoffs: Report.” Is it accurate? Who knows? In the midst of a writers’ strike in Hollywood, I thought immediately about endless sequels to films like “Batman 3: Deleting Robin” and “Halloween 8: The Night of the Dinobaby Purge.”

The write up reports a management method similar to the ones the high school science club implemented when its request for a field trip to the morgue was turned down. The school’s boiler suffered a mysterious malfunction and school was dismissed for a day. Heh heh heh.

I noted this passage:

Even as global tech giants are carrying out mass layoffs, several top Silicon Valley companies are reportedly looking to hire lower-paid tech workers from foreign countries. Google, Meta, Amazon, Microsoft, Zoom, Salesforce and Palantir have applied for thousands of H1B worker visas this year…

I heard a rumor that IBM used a similar technique. Would Big Blue replace older, highly paid employees with GenX professionals not born in the US? Of course not! The term “dinobabies” was a product of spontaneous innovation, not the work of a personnel professional located in a suburb of New York City. Happy bean counters indeed. Saving money with good enough work. I love the phrase “minimum viable product” for “minimally viable” work environments.

There are so many ways to allow people to find their futures elsewhere. Shelf stockers are in short supply, I hear.

Stephen E Arnold, September 21, 2023

Just TikToking Along, Folks

September 21, 2023

Beleaguered in the US, its largest market, TikTok is ready to embrace new options in its Southeast Asian advance. CNBC reports, “TikTok Shop Strikes ‘Buy Now, Pay Later’ Partnership in Malaysia As Part of E-Commerce Push.” Writer Cheila Chiang reports:

“The partnership comes as TikTok looks to markets outside of the U.S. for growth. While the U.S. is the company’s largest market, TikTok faces headwinds there after Montana became the first state to ban the app. The app has also been banned in India. In recent months, TikTok Shop has been aggressively expanding into e-commerce in Southeast Asia, competing against existing players like Sea’s Shopee and Alibaba’s Lazada. TikTok’s CEO previously said the company will pour ‘billions of dollars’ into Southeast Asia over the next few years. As of April, TikTok said it has more than 325 million monthly users in Southeast Asia. In June, the company said it would invest $12.2 million to help over 120,000 small and medium-sized businesses sell online. The investment consists of cash grants, digital skills training and advertising credits for these businesses.”

What a great idea for the teenagers who are the largest cohort of TikTok users. Do they fully grasp the pay-later concept and its long-term effects? Sure, no problem. Kids love to work at part-time jobs, right? As long as major corporations get to expand as desired, that is apparently all that matters.

Cynthia Murrell, September 21, 2023

Those 78s Will Sell Big Again?

September 21, 2023

The Internet Archive (IA) is a wonderful repository of digital information, but it is a controversial organization when it comes to respecting copyright laws. After battling a landmark case against book publishers, the IA is now facing another lawsuit, as reported in the post “Internet Archive Responds To Recording Industry Lawsuit Targeting Obsolete Media.” Sony, Universal Music Group, and other large record labels are suing the IA and others over the Great 78 Project.

The Great 78 Project’s goal is to preserve, research, discover, and share 78 rpm records that are 70-120 years old. Librarians, archivists, and sound engineers combined their resources to preserve the archaic, analog medium and provide free public access. The preserved recordings are used for research and teaching at museums, universities, and more:

“Statement from Brewster Kahle, digital librarian of the Internet Archive: ‘When people want to listen to music they go to Spotify. When people want to study 78rpm sound recordings as they were originally created, they go to libraries like the Internet Archive. Both are needed. There shouldn’t be conflict here.’”

Preserving an old yet appreciated medium is worthwhile and a labor of love. IA’s blog post fails to explain the details behind the lawsuit or defend the Great 78 Project beyond restating its purpose. The IA should explain how the record companies’ copyright concerns square with the fact that many of the recordings are now in the public domain. The Great 78 Project should continue, but the record companies should work with the preservation team instead of fighting them in court.

Whitney Grace, September 21, 2023

Microsoft Claims to Bring Human Reasoning to AI with New Algorithm

September 20, 2023

Has Microsoft found the key to melding the strengths of AI reasoning and human cognition? Decrypt declares, “Microsoft Infuses AI with Human-Like Reasoning Via an ‘Algorithm of Thoughts’.” Not only does the Algorithm of Thoughts (AoT for short) come to better conclusions, it also saves energy by streamlining the process, Microsoft promises. Writer Jose Antonio Lanz explains:

“The AoT method addresses the limitations of current in-context learning techniques like the ‘Chain-of-Thought’ (CoT) approach. CoT sometimes provides incorrect intermediate steps, whereas AoT guides the model using algorithmic examples for more reliable results. AoT draws inspiration from both humans and machines to improve the performance of a generative AI model. While humans excel in intuitive cognition, algorithms are known for their organized, exhaustive exploration. The research paper says that the Algorithm of Thoughts seeks to ‘fuse these dual facets to augment reasoning capabilities within LLMs.’ Microsoft says this hybrid technique enables the model to overcome human working memory limitations, allowing more comprehensive analysis of ideas. Unlike CoT’s linear reasoning or the ‘Tree of Thoughts’ (ToT) technique, AoT permits flexible contemplation of different options for sub-problems, maintaining efficacy with minimal prompting. It also rivals external tree-search tools, efficiently balancing costs and computations. Overall, AoT represents a shift from supervised learning to integrating the search process itself. With refinements to prompt engineering, researchers believe this approach can enable models to solve complex real-world problems efficiently while also reducing their carbon impact.”

Wowza! Lanz expects Microsoft to incorporate AoT into its GPT-4 and other advanced AI systems. (Microsoft has partnered with OpenAI and invested billions into ChatGPT; it has an exclusive license to integrate ChatGPT into its products.) Does this development bring AI a little closer to humanity? What is next?
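
Since AoT is a prompting technique rather than a new model, the contrast with Chain-of-Thought is easiest to see in the prompts themselves. Below is a minimal, hypothetical sketch: the wording and the `ask_llm` stub are our own illustration, not the prompts from Microsoft’s paper.

```python
# Hypothetical prompt templates; the actual prompts in the AoT paper differ.

def ask_llm(prompt: str) -> str:
    """Stub standing in for any chat-completion API call."""
    return f"(model response to {len(prompt)} chars of prompt)"

PROBLEM = "Using the numbers 4, 7, 8, 8 once each, reach 24 with + - * /."

# Chain-of-Thought: ask for one linear chain of reasoning steps.
cot_prompt = (
    f"{PROBLEM}\n"
    "Think step by step, then give the final expression."
)

# Algorithm-of-Thoughts style: show the model an in-context example of
# systematic search, so it explores, prunes, and backtracks within one pass.
aot_prompt = (
    f"{PROBLEM}\n"
    "Solve it by searching like an algorithm. Example trace:\n"
    "  try 8*7=56 -> remaining 4,8 -> 56-4*8=24? yes -> candidate\n"
    "  try 8+8=16 -> remaining 4,7 -> dead end, backtrack\n"
    "List promising branches, prune dead ends, then give the best answer."
)

print(ask_llm(cot_prompt))
print(ask_llm(aot_prompt))
```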

Cynthia Murrell, September 20, 2023

AI May Be Like a Red, Red Rose: Fading Fast? Oh, No

September 20, 2023

Well that was fast. Vox ponders, “Is the AI Boom Already Over?” Reporter Sara Morrison recounts generative AI’s adventure over the past year, from the initial wonder at tools like ChatGPT and assorted image generators to the sky-high investments in AI companies. Now, though, the phenomenon may be drifting back to Earth. Morrison writes:

“Several months later, the bloom is coming off the AI-generated rose. Governments are ramping up efforts to regulate the technology, creators are suing over alleged intellectual property and copyright violations, people are balking at the privacy invasions (both real and perceived) that these products enable, and there are plenty of reasons to question how accurate AI-powered chatbots really are and how much people should depend on them. Assuming, that is, they’re still using them. Recent reports suggest that consumers are starting to lose interest: The new AI-powered Bing search hasn’t made a dent in Google’s market share, ChatGPT is losing users for the first time, and the bots are still prone to basic errors that make them impossible to trust. In some cases, they may be even less accurate now than they were before. A recent Pew survey found that only 18 percent of US adults had ever used ChatGPT, and another said they’re becoming increasingly concerned about the use of AI. Is the party over for this party trick?”

The post hastens to add that generative AI is here to stay. It is just that folks are a bit less excited about it. Besides Bing’s mediocre AI showing, cited above, the article supplies examples of several other disappointing projects. One key reason for the decline is generative AI’s tendency to simply get things wrong. Many hoped this issue would soon be resolved, but it may actually be getting worse. Other problems, of course, include stubborn bias and inappropriate comments. Until its many flaws are resolved, Morrison observes, generative AI should probably remain no more than a party trick.

Cynthia Murrell, September 20, 2023

Big Tech: Your Money or Your Digital Life? We Are Thinking

September 20, 2023

Why is anyone surprised that big tech companies want to exploit AI for profit? Business Insider gives a quick rundown on how big tech advertised AI as a beneficial research tool but now prioritizes it as a commercial revenue tool in “Silicon Valley Presented AI As A Noble Research Tool. Now It’s All About Cold, Hard Cash.”

Big tech companies presented AI research as a societal boon whose findings would be shared with everyone. The research was done without worrying about costs, the ideal situation for discovery. Google wrote off $1.3 million of DeepMind’s debt to demonstrate its commitment to advancing AI research.

As inflation rises, big tech companies are worried about their bottom lines. ChatGPT and similar algorithms have made significant headway in AI science, so big tech companies are eager to exploit them for money. They are racing to commercialize chatbots by promoting the benefits to consumers. Competitors are forced to develop their own chatbots or lose business.

Meta says it is prioritizing AI research but ironically sacked a team researching protein folding. Meta wants to cut the fat to concentrate on profits. Unfortunately, the protein folding work was axed even though understanding protein folding could help scientists understand diseases such as Parkinson’s and Alzheimer’s.

Google is focusing on net profits too. One good example is a new DeepMind unit that shares AI research papers to improve people’s lives as well as products. Meta, for its part, made the new large language model Llama 2 an open source tool for businesses with fewer than 700 million monthly active users. Google continues to output smart chatbots. Hey, students, isn’t that helpful?

It is unfortunate that humans are inherently selfish beings. If we did everything for the benefit of society it would be great, but history has shown socialism and communism do not work. There may be a way to fund exploratory research without worrying about money; we just have not found it yet.

Whitney Grace, September 20, 2023

Gemini Cricket: Another World Changer from the Google

September 19, 2023

AI lab DeepMind, acquired by Google in 2014, is famous for creating AlphaGo, a program that defeated a human champion Go player in 2016. Since then, its developers have been methodically honing their software. Meanwhile, ChatGPT exploded onto the scene and Google is feeling the pressure to close the distance. Wired reports, “Google DeepMind CEO Demis Hassabis Says Its Next Algorithm Will Eclipse ChatGPT.” We learn the company just combined the DeepMind division with its Brain lab. The combined team hopes its Gemini software will trounce the competition. Someday. Writer Will Knight tells us:

“DeepMind’s Gemini, which is still in development, is a large language model that works with text and is similar in nature to GPT-4, which powers ChatGPT. But Hassabis says his team will combine that technology with techniques used in AlphaGo, aiming to give the system new capabilities such as planning or the ability to solve problems. … AlphaGo was based on a technique DeepMind has pioneered called reinforcement learning, in which software learns to take on tough problems that require choosing what actions to take like in Go or video games by making repeated attempts and receiving feedback on its performance. It also used a method called tree search to explore and remember possible moves on the board.”
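
For readers who have not met tree search before, a toy example may help. The sketch below exhaustively searches the game tree of a simple take-away game; AlphaGo’s real machinery (Monte Carlo tree search guided by neural networks trained with reinforcement learning) is far more elaborate, so treat this only as a miniature of the “explore and remember possible moves” idea.

```python
from functools import lru_cache

# Toy game-tree search: players alternately remove 1-3 stones;
# whoever takes the last stone wins.

@lru_cache(maxsize=None)
def best_move(stones: int) -> tuple[int, bool]:
    """Return (move, can_current_player_win) by searching every branch."""
    for take in (1, 2, 3):
        if take == stones:
            return take, True              # taking the last stone wins outright
        if take < stones:
            _, opponent_wins = best_move(stones - take)
            if not opponent_wins:
                return take, True          # move into a position the opponent loses
    return 1, False                        # every branch loses; play anything

print(best_move(10))   # (2, True): leave a multiple of 4 for the opponent
```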

Not ones to limit themselves, the Googley researchers may pilfer ideas from other AI realms like robotics and neuroscience. Hassabis is excited about the possibilities AI offers when wielded for good, but acknowledges the need to mitigate potential risks. The article relates:

“One of the biggest challenges right now, Hassabis says, is to determine what the risks of more capable AI are likely to be. ‘I think more research by the field needs to be done—very urgently—on things like evaluation tests,’ he says, to determine how capable and controllable new AI models are. To that end, he says, DeepMind may make its systems more accessible to outside scientists.”

Transparency in AI? That may be the CEO’s most revolutionary idea yet.

Cynthia Murrell, September 19, 2023

Google: Privacy Is Number One?

September 19, 2023

Big tech companies like Google do not respect users’ privacy rights. Yes, these companies have privacy statements and other legal documents that state they respect individuals’ privacy, but it is all smoke and mirrors. The Verge has the lowdown on a privacy lawsuit filed against Google and a judge’s recent decision: “$5 Billion Google Lawsuit Over ‘Incognito Mode’ Tracking Moves A Step Closer To Trial.”

Chasom Brown, William Byatt, Jeremy Davis, Christopher Castillo, and Monique Trujillo filed a class action lawsuit against Google for collecting user information while in “incognito mode.” Publicly known as Chasom Brown, et al. v. Google, the plaintiffs seek $5 billion in damages. Google requested a summary judgment, but Judge Yvonne Gonzalez Rogers of California denied it.

Judge Gonzalez Rogers noted that statements in the Chrome Privacy Notice, Privacy Policy, Incognito Splash Screen, and Search & Browse Privately Help page explain how Incognito mode limits information collection and how people can control what information is shared. The judge wants the court to decide whether these notices act as a binding agreement between Google and users that the former would not collect users’ data when they browsed privately.

Google disputes the claims and states that every time a new incognito tab is opened, Web sites might collect user information. There are other issues the plaintiffs and judge want to discuss:

“Another issue going against Google’s arguments that the judge mentioned is that the plaintiffs have evidence Google ‘stores users’ regular and private browsing data in the same logs; it uses those mixed logs to send users personalized ads; and, even if the individual data points gathered are anonymous by themselves, when aggregated, Google can use them to ‘uniquely identify a user with a high probability of success.’’

She also responded to a Google argument that the plaintiffs didn’t suffer economic injury, writing that ‘Plaintiffs have shown that there is a market for their browsing data and Google’s alleged surreptitious collection of the data inhibited plaintiffs’ ability to participate in that market…Finally, given the nature of Google’s data collection, the Court is satisfied that money damages alone are not an adequate remedy. Injunctive relief is necessary to address Google’s ongoing collection of users’ private browsing data.’”
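
The judge’s aggregation point is, at bottom, arithmetic: signals that are harmless alone multiply into something close to a unique identifier. Here is a back-of-the-envelope sketch with invented, order-of-magnitude numbers; none of these figures come from the case.

```python
import math

# Illustrative guesses at how many values each "anonymous" signal can take.
signals = {
    "browser + version": 100,
    "operating system": 10,
    "screen resolution": 50,
    "timezone": 24,
    "interface language": 30,
    "installed fonts bucket": 200,
}

population = 5_000_000_000                 # rough number of web users

combinations = math.prod(signals.values())
bits = sum(math.log2(v) for v in signals.values())
expected_matches = population / combinations

print(f"{bits:.1f} bits of signal, ~{expected_matches:.1f} users per profile")
# roughly 32.7 bits; fewer than one expected match, i.e. effectively unique
```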

Will Chasom Brown, et al. v. Google go anywhere beyond the California court? Will the rest of the United States and other jurisdictions with a large Google market, such as the European Union, do anything?

Whitney Grace, September 19, 2023
