RIFed by AI? Do Not Give Up Hope, Ye Who Enter There
April 18, 2024
Rest assured, job seekers, it is not your imagination. Even those with impressive resumes are having trouble landing an interview, never mind a position. Case in point, Your Tango shares, “Former Google Employee Applies to 50 Jobs that He’s Overqualified For and Tracks the Alarming Number of Rejections.” Writer Nia Tipton summarizes a pair of experiments documented on TikTok by ex-Googler Jonathan Javier. He found prospective employers were not impressed with his roles at some of the biggest tech firms in the world. In fact, his years of experience may have harmed his chances: his first 50 applications were designed to see how he would fare as an overqualified candidate. Most companies either did not respond or rejected him outright. He was not surprised. Tipton writes:
“Javier explained that recruiters are seeing hundreds of applications daily. ‘For me, whenever I put a job break out, I get about 30 to 50 every single day,’ he said. ‘So again, everybody, it’s sometimes not your resume. It’s sometimes that there’s so many qualified candidates that you might just be candidate number two and number three.’”
So take heart, applicants, rejections do not necessarily mean you are not worthy. There are just not enough positions to go around. The write-up points to February numbers from the Bureau of Labor Statistics that show that, while the number of available jobs has been growing, so has the unemployment rate. Javier’s experimentation continued:
“In another TikTok video, Jonathan continued his experiment and explained that he applied to 50 jobs with two similar resumes. The first resume showed that he was overqualified, while the other showed that he was qualified. Jonathan quickly received 24 rejections for the overqualified resume, while he received 15 rejections for the qualified resume. Neither got him any interviews. Something interesting that Javier noted was how fast he was rejected with his overqualified resume. From this, he observed that overqualified candidates are often overlooked in favor of candidates that fit 100% of the qualities they are looking for. ‘That’s unfortunate because it creates a bias for people who might be older or who might have a lot more experience, but they’re trying to transition into a specific industry or a new position,’ he said.”
Ouch. It is unclear what, if anything, can be done about this specificity bias in hiring. It seems all one can do is keep trying. But, not that way.
Cynthia Murrell, April 18, 2024
Kagi Search Beat Down
April 17, 2024
This essay is the work of a dumb dinobaby. No smart software required.
People surprise me. It is difficult to craft a search engine. Sure, a recent compsci graduate will tell you, “Piece of cake.” It is not. Even with oodles of open source technology, easily gettable content, and a few valiant individuals who actually want relevant results — search and retrieval are tough to get right. The secret to good search, in my opinion, is to define a domain, preferably a technical field, identify the relevant content, obtain rights, if necessary, and then do the indexing and the other “stuff.”
In my experience, it is a good idea to have either a friend with deep pockets, a US government grant (hello, NSF, said Google decades ago), or a credit card with a hefty credit line. Failing these generally acceptable solutions, one can venture into the land of other people’s money. When that runs out or just does not work, one can become a pay-to-play outfit. We know what that business model delivers. But for a tiny percentage of online users, a subscription service makes perfect sense. The only problem is that selling subscriptions is expensive, and there is the problem of churn. Lose a customer and spend quite a bit of money replacing that individual. Lose a big customer, and spend oodles and oodles of money replacing that big spender.
I read “Do Not Use Kagi.” This, in turn, directed me to “Why I Lost Faith in Kagi.” Okay, what’s up with the Kagi booing? The “Lost Faith” article runs about 4,000 words. The key passage for me is:
Between the absolute blasé attitude towards privacy, the 100% dedication to AI being the future of search, and the completely misguided use of the company’s limited funds, I honestly can’t see Kagi as something I could ever recommend to people.
I looked at Kagi when it first became available, and I wrote a short email to the “Vlad” persona. I am not sure if I followed up. I was curious about how the blend of artificial intelligence and metasearch was going to deal with such issues as:
- Deduplication of results (a toy sketch appears after this list)
- Latency when a complex query in a metasearch system has to wait for a module to do its thing
- How the business model was going to work: expensive subscription, venture funding, collateral sales of the interface to law enforcement, advertising, etc.
- Controlling the cost of the pings, pipes, and power for the plumbing
- Spam control
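To make the first bullet concrete, here is a toy sketch of the deduplication chore. The result format and the helper names are my own illustration, not Kagi’s pipeline; a real system would also cluster near-duplicate titles and snippets, not just normalized URLs.

```python
from urllib.parse import urlsplit

def normalize(url: str) -> str:
    """Collapse common variants (scheme, www., trailing slash, query string)
    so near-identical hits from different engines compare equal."""
    parts = urlsplit(url.lower())
    host = parts.netloc.removeprefix("www.")
    return host + parts.path.rstrip("/")

def dedupe(result_lists):
    """Merge ranked result lists from several engines round-robin, keeping
    the first (best-ranked) occurrence of each normalized URL.
    Note: zip truncates to the shortest list; a real system would not."""
    seen, merged = set(), []
    for tier in zip(*result_lists):
        for hit in tier:
            key = normalize(hit["url"])
            if key not in seen:
                seen.add(key)
                merged.append(hit)
    return merged

# Example: two engines returning overlapping hits
a = [{"url": "https://www.example.com/page/"}, {"url": "http://foo.org/x"}]
b = [{"url": "http://example.com/page"}, {"url": "https://bar.net/y"}]
print([h["url"] for h in dedupe([a, b])])
# -> ['https://www.example.com/page/', 'http://foo.org/x', 'https://bar.net/y']
```

Even this toy version hints at why latency is the second bullet: the merge cannot start until the slowest engine answers.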
I know from experience that those dabbling in the search game ignore some of my routine questions. The reasons range from “we are smarter than you” to “our approach just handles these issues.”
Thanks, MSFT Copilot. Recognize anyone in the image you created?
I still struggle with the business model of non-ad supported search and retrieval systems. Subscriptions work. Well, they worked out of the gate for ChatGPT, but how many smart search systems do I want to join? Answer: Zero.
Metasearch systems are simply sucker fish on the shark bodies of a Web search operator. Bing is in the metasearch game because it is a fraction of the Googzilla operation. It is doing what it can to boost its user base. Just look at the wonky Edge ads and the rumored minuscule gain the addition of smart search has delivered to Bing traffic. Poor Yandex is relocating and finds itself in a different world from the cheerful environment of Russia.
Web content indexing is expensive, difficult, and tricky.
But why pick on Kagi? Beats me. Why not write about dogpile.com, ask.com, the duck thing, or startpage.com (formerly ixquick.com)? Each embodies a certain subsonic vibe, right?
Maybe it is the AI flavor of Kagi? Maybe it is the amateur hour approach taken with some functions? Maybe it is just a disconnect between an informed user and an entrepreneurial outfit running a mile a minute with a sign that says, “Subscribe”?
I don’t know, but it is interesting that, with Web search essentially a massive disappointment, some bright GenX’er has not figured out a solution.
Stephen E Arnold, April 17, 2024
The National Public Radio Entity Emulates Grandma
April 17, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I can hear my grandmother telling my cousin Larry: “Chew your food. Or… no television for you tonight.” The time was 6:30 pm. The date was March 3, 1956. My cousin and I were being “watched” while our parents were at a political rally and banquet. Grandmother was in charge, and my cousin was edging close to being sent to grandfather for a whack with his wooden paddle. Tough love, I suppose. I was a good boy. I chewed my food and worked to avoid the Wrath of Ma. I did the time travel thing when I read “NPR Suspends Veteran Editor As It Grapples with His Public Criticism.” I avoid begging-for-dollars outfits. I had no idea what the issue is or was.
“Gea’t haspoy” which means in grandmother speak: “That’s it. No TV for you tonight. In the morning, both of you are going to help Grandpa mow the yard and rake up the grass.” Thanks, NPR. Oh, sorry, thanks MSFT Copilot. You do the censorship thing too, don’t you?
The write up explains:
NPR has formally punished Uri Berliner, the senior editor who publicly argued a week ago that the network had “lost America’s trust” by approaching news stories with a rigidly progressive mindset.
Oh, I get it. NPR allegedly shapes stories. A “real” journalist does not go along with the program. The progressive-leaning outfit ignores the free speech angle. The “real” journalist is punished with five days in a virtual hoosegow. An NPR “real” journalist published an essay critical of NPR and then vented on a podcast.
The article I have cited is an NPR article. I guess self criticism is a progressive trait, maybe? Anyway, the article about the grandma action stated:
In rebuking Berliner, NPR said he had also publicly released proprietary information about audience demographics, which it considers confidential. He said those figures “were essentially marketing material. If they had been really good, they probably would have distributed them and sent them out to the world.”
There is no hint that this “real” journalist shares beliefs believed to be held by Julian Assange or that bold soul Edward Snowden, both of whom have danced with super interesting information.
Several observations:
- NPR’s suspending an employee reminds me of my grandmother punishing us for not following her wacky rules
- NPR is definitely implementing a type of information shaping; if it were not, what’s the big deal about a grousing employee? How many of these does Google have protesting in a year?
- Banning a person who is expressing an opinion strikes me as a tasty blend of X.com and that master motivator Joe Stalin. But that’s just my dinobaby mind having a walk-about.
Net net: What media are not censoring, muddling, and acting like grandma?
Stephen E Arnold, April 17, 2024
Meta: Innovating via Intentions
April 17, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Analytics India published “Meta Releases AI on WhatsApp, Looks Like Perplexity AI.” The headline caught my attention. I don’t pay much attention to the Zuckbook and the other Meta properties. The Analytics India story made this statement which caught my eye:
What users type in the search bar remains confidential and is not shared with Meta AI unless users intentionally send a query to the Meta AI chatbot.
I am okay with copying from Silicon Valley type outfits. That’s part of the game, which includes colors, shuffling staff, and providing jibber jabber instead of useful interfaces and documentation about policies. But think about the statement: “unless users intentionally send a query to the Meta AI chatbot.” Doesn’t that mean we don’t keep track of queries unless a user sends a query to the Zuckbook’s smart software? I love the “intention” because the user is making a choice between a search function which one of my team told me is not very useful and a “new” search system which will be better. If it is better, then user queries get piped into a smart search system for which the documentation is sparse. What happens to those data? How will those data be monetized? Will the data be shared with those who have a business relationship with Meta?
Thanks, MSFT Copilot. Good enough, but that’s what one might say about MSFT security, right?
So many questions.
The article states:
Users can still search their conversations for specific content without interacting with Meta AI, maintaining the same level of ease and privacy as before. Additionally, personal messages and calls remain end-to-end encrypted, ensuring neither WhatsApp nor Meta can access them, even with the Meta AI integration.
There is no substantiation of this assertion. Indeed, since the testimony of Frances Haugen, I am not certain what Meta does, and I am not willing to accept assertions about what is accessible to the firm’s employees and what is not. What about the metadata? Is that part of the chunk of data Meta cannot access?
Facebook, WhatsApp, and Instagram are interesting services. The information in the Meta services appears to be quite useful for a number of endeavors. Meta is less helpful to academic research groups than it could be. Some have found data cut off or filtered. Imitating another AI outfit’s graphic design is the lowest on my list of Meta issues.
The company is profitable. It has considerable impact. The firm has oodles of data. But now a user’s intention gives permission to an interesting outfit to do whatever with that information. Unsettling? Nope, just part of the unregulated world of digital operations which some assert are having a somewhat negative impact on society. Yep, intentionally.
Stephen E Arnold, April 17, 2024
Data Thirst? Guess Who Can Help?
April 17, 2024
As large language models approach the limit of freely available data on the Internet, companies are eyeing sources supposedly protected by copyrights and user agreements. PCMag reports, “Google Let OpenAI Scrape YouTube Data Because Google Was Doing It Too.” It seems Google would rather double down on violations than be hypocritical. Writer Emily Price tells us:
“OpenAI made headlines recently after its CTO couldn’t say definitively whether the company had trained its Sora video generator on YouTube data, but it looks like most of the tech giants—OpenAI, Google, and Meta—have dabbled in potentially unauthorized data scraping, or at least seriously considered it. As the New York Times reports, OpenAI transcribed more than a million hours of YouTube videos using its Whisper technology in order to train its GPT-4 AI model. But Google, which owns YouTube, did the same, potentially violating its creators’ copyrights, so it didn’t go after OpenAI. In an interview with Bloomberg this week, YouTube CEO Neal Mohan said the company’s terms of service ‘does not allow for things like transcripts or video bits to be downloaded, and that is a clear violation of our terms of service.’ But when pressed on whether YouTube data was scraped by OpenAI, Mohan was evasive. ‘I have seen reports that it may or may not have been used. I have no information myself,’ he said.”
How silly to think the CEO would have any information. Besides stealing from YouTube content creators, companies are exploring other ways to pierce untapped sources of data. According to the Times article cited above, Meta considered buying Simon & Schuster to unlock all its published works. We are sure authors would have been thrilled. Meta executives also considered scraping any protected data it could find and hoping no one would notice. If caught, we suspect they would consider any fees a small price to pay.
The same article notes Google changed its terms of service so it could train its AI on Google Maps reviews and public Google Docs. See, the company can play by the rules, as long as it remembers to change them first. Preferably, as it did here, over a holiday weekend.
Cynthia Murrell, April 17, 2024
A Less Crazy View of AI: From Kathmandu via Tufts University
April 16, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I try to look for interesting write ups from numerous places. Some in Kentucky (well, not really) and others in farther-flung locations like Kathmandu. I read “The boring truth about AI.” The article was not boring in my opinion. The author (Amar Bhidé) presented what seemed like a non-crazy, hyperbole-free discussion of smart software. I am not sure how many people in Greenspring, Kentucky, read the Kathmandu Post, but then I am not sure how many people in Greenspring, Kentucky, can read.
Rah rah. Thanks, MSFT Copilot, you have the hands-on expertise to prove that the New York City chatbot is just the best system when it comes to providing information of a legal nature that is dead wrong. Rah rah.
What’s the Tufts University business professor say? Let’s take a look at several statements in the article.
First, I circled this passage:
As economic historian Nathan Rosenberg and many others have shown, transformative technologies do not suddenly appear out of the blue. Instead, meaningful advances require discovering and gradually overcoming many unanticipated problems.
Second, I put a blue check mark next to this segment:
Unlike the Manhattan Project, which proceeded at breakneck speed, AI developers have been at work for more than seven decades, quietly inserting AI into everything from digital cameras and scanners to smartphones, automatic braking and fuel-injection systems in cars, special effects in movies, Google searches, digital communications, and social-media platforms. And, as with other technological advances, AI has long been put to military and criminal uses. Yet AI advances have been gradual and uncertain.
The author references IBM’s outstanding Watson system. I think that’s part of the “gradual and uncertain” in the hands of Big Blue’s marketing professionals.
Finally, I drew a happy face next to this:
Perhaps LLM chatbots can increase profits by providing cheap, if maddening, customer service. Someday, a breakthrough may dramatically increase the technology’s useful scope. For now, though, these oft-mendacious talking horses warrant neither euphoria nor panic about “existential risks to humanity.” Best keep calm and let the traditional decentralised evolution of technology, laws, and regulations carry on.
I would suggest that a more pragmatic and less frenetic approach to smart software makes more sense than the wild and crazy information zapped from podcasts and conference presentations.
Stephen E Arnold, April 16, 2024
Google Cracks Infinity Which Overshadows Quantum Supremacy Maybe?
April 16, 2024
This essay is the work of a dumb dinobaby. No smart software required.
The AI wars are in overdrive. Google’s high school rhetoric is in another dimension. Do you remember quantum supremacy? No? That’s okay, but it makes clear that the Google is the leader in quantum computing. When will that come to the Pixel mobile device? Now Google’s wizards, infused with the juices of a rampant high school science club member (note the words rampant and member, please; they are intentional), have cracked infinity.
An article in Analytics India (now my favorite cheerleading reference tool) uses this headline: “Google Demonstrates Method to Scale Language Model to Infinitely Long Inputs.” Imagine a demonstration of infinity using infinite inputs. I thought the smart software outfits were struggling to obtain enough content to train their models. Now Google’s wizards can handle “infinite” inputs. If one demonstrates infinity, how long will that take? Is one possible answer, “An infinite amount of time”?
Wow.
The write up says:
This modification to the Transformer attention layer supports continual pre-training and fine-tuning, facilitating the natural extension of existing LLMs to process infinitely long contexts.
Even more impressive is the diagram of the “infinite” method. I assure you that it won’t take an infinite amount of time to understand it.
See, infinity may have contributed to Cantor’s mental issues, but the savvy Googlers have sidestepped that problem. Nifty.
But the write up suggests that “infinite” like many Google superlatives has some boundaries; for instance:
The approach scales naturally to handle million-length input sequences and outperforms baselines on long-context language modelling benchmarks and book summarization tasks. The 1B model, fine-tuned on up to 5K sequence length passkey instances, successfully solved the 1M length problem.
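For readers who want more than marketing, here is a heavily simplified sketch of the segment-by-segment compressive-memory idea the paper describes. The ELU-style memory update follows the Infini-attention write-up; the fixed gate scalar and the omitted causal mask are simplifications of mine, not Google’s code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def elu1(x):
    # sigma(x) = ELU(x) + 1 keeps memory updates non-negative
    return np.where(x > 0, x + 1.0, np.exp(x))

def infini_attention_segment(q, k, v, memory, z, gate=0.5):
    """One segment of 'infinite' context: local softmax attention plus a
    retrieval from a fixed-size compressive memory of all past segments.
    q, k, v: (seg_len, d); memory: (d, d); z: (d,)."""
    d = q.shape[-1]
    local = softmax(q @ k.T / np.sqrt(d)) @ v       # ordinary attention
    sq = elu1(q)
    denom = sq @ z + 1e-6                           # per-query normalizer
    retrieved = (sq @ memory) / denom[:, None]      # read old context
    memory = memory + elu1(k).T @ v                 # constant-size update
    z = z + elu1(k).sum(axis=0)
    out = gate * retrieved + (1.0 - gate) * local   # blend old and new
    return out, memory, z

# Usage: stream arbitrarily many segments through a fixed-size memory.
d, seg_len = 16, 8
memory, z = np.zeros((d, d)), np.zeros(d)
for _ in range(1000):                               # "infinite" input
    q, k, v = (np.random.randn(seg_len, d) for _ in range(3))
    out, memory, z = infini_attention_segment(q, k, v, memory, z)
```

The point of the trick: the memory matrix stays the same size no matter how many segments stream through, which is what lets the marketers say “infinite.”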
Google is trying very hard to match Microsoft’s marketing coup which caused the Google Red Alert. Even high schoolers can be frazzled by flashing lights, urgent management edicts, and the need to be perceived as a leader in something other than online advertising. The science club at Google will keep trying. Next up: quantumly infinite. Yeah.
Stephen E Arnold, April 16, 2024
Another Cultural Milestone for Social Media
April 16, 2024
Well, this is an interesting report. PsyPost reports, “Researchers Uncover ‘Pornification’ Trend Among Female Streamers on Twitch.” Authored by Kristel Anciones-Anguita and Mirian Checa-Romero, the study was published in the Humanities and Social Sciences Communications journal. The team analyzed clips from 1,920 livestreams on Twitch.tv, a platform with a global daily viewership of 3 million. They found women streamers sexualize their presentations much more often, and more intensely, than the men. Also, the number of sexy streams depends on the category. Not surprisingly, broadcasters in categories like ASMR and “Pools, Hot Tubs & Beaches” are more self-sexualized than, say, gamer girls. Shocking, we know.
The findings are of interest because Twitch broadcasters formulate their own images, as opposed to performers on traditional media. There is a longstanding debate, even among feminists, whether using sex to sell oneself is empowering or oppressive. Or maybe both. Writer Eric W. Dolan notes:
“Studies on traditional media (such as TV and movies) have extensively documented the sexualization of women and its consequences. However, the interactive and user-driven nature of new digital platforms like Twitch.tv presents new dynamics that warrant exploration, especially as they become integral to daily entertainment and social interaction. … This autonomy raises questions about the factors driving self-sexualization, including societal pressures, the pursuit of popularity, and the platform’s economic incentives.”
Or maybe women are making fully informed choices and framing them as victims of outside pressure is condescending. Just a thought. The issue gets more murky when the subjects, or their audiences, are underage. The write-up observes:
“These patterns of self-sexualization also have potential implications for the shaping of audience attitudes towards gender and sexuality. … ‘Our long-term goals for this line of research include deepening our understanding of how online sexualized culture affects adolescent girls and boys and how we can work to create more inclusive and healthy online communities,’ Anciones-Anguita said. ‘This study is just the beginning, and there is much more to explore in terms of the pornification of culture and its psychological impact on users.’”
Indeed there is. See the article for more details on what the study considered “sexualization” and what it found.
Cynthia Murrell, April 16, 2024
An Interesting Prediction about Mobile Phones
April 15, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I have hated telephone calls for decades: Intrusive, phone tag baloney, crappy voice mail systems, and wacko dialing codes when in a country in which taxis are donkeys. No thanks. But the mobile phone revolution is here. Sure, I have a mobile phone. Plus, I have a Chinese job just to monitor data flows. And I have an iPhone which I cart around to LE trade shows to see if a vendor can reveal the bogus data we put on the device.
What’s the future? An implant? Yeah, that sounds like a Singularity thing or a big ear ring, a wire, and a battery pack which can power a pacemaker, an artificial kidney, and an AI processing unit. What about a device that is smart and replaces the metal candy bar, which has not manifested innovations in the last five or six years? I don’t care about a phone which is capable of producing TikToks.
The future of the phone has been revealed in the online publication Phone Arena. “AI Will Kill the Smartphone As We Know It. Here’s Why!” explains:
I know the idea may sound very radical at first glance, but if we look with a cold, objective eye at where the world is going with the software as a service model, it suddenly starts to sound less radical.
The idea is that the candy bar device will become a key fob, a decorative pin (maybe a big decorative pin), a medallion on a thick gold chain (rizz, right?), or maybe a shrinkflation candy bar?
My own sense of the future is skewed because I am a dinobaby. I have a cheapo credit card which is a semi-reliable touch-and-tap gizmo. Why not use a credit card form factor with a small screen (obviously unreadable by a dinobaby but designers don’t care about dinobabies in my experience). With ambient functionality, the card “just connects” and one can air talk and read answers on the unreadable screen. Alternatively, one’s wireless ear buds can handle audio duties.
Net net: The AI function is interesting. However, other technical functions will have to become available. Until then, keep upgrading those mobile phones. No, I won’t answer. No, I won’t click on texts from numbers I don’t have on a white list. No, I won’t read social media baloney. That’s a lot of no’s, isn’t it? Too bad. When you are a dinobaby, you will understand.
Stephen E Arnold, April 15, 2024
Taming AI Requires a Combo of AskJeeves and Watson Methods
April 15, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I spotted a short item called “A Faster, Better Way to Prevent an AI Chatbot from Giving Toxic Responses.” The operative words from my point of view are “faster” and “better.” The write up reports (with a serious tone, of course):
Teams of human testers write prompts aimed at triggering unsafe or toxic text from the model being tested. These prompts are used to teach the chatbot to avoid such responses.
Yep, AskJeeves created rules. As long as the users of the system asked a question for which there was a rule, the helpful servant worked; for example, “What’s the weather in San Francisco?” However, ask a question for which there was no rule, and what happens? The search engine reality falls behind the marketing juice and gets shopped until a less magical version appears as Ask.com. And then there is IBM Watson. That system endeared itself to groups of physicians who were invited to answer IBM “experts’” questions about cancer treatments. I heard when Watson was in full medical-revolution mode that some docs in a certain Manhattan hospital used dirty words to express their view about the Watson method. Rumor or actual factual? I don’t know, but involving humans in making software smart can be fraught with challenges: managerial and financial, to name but two.
The write up says:
Researchers from Improbable AI Lab at MIT and the MIT-IBM Watson AI Lab used machine learning to improve red-teaming. They developed a technique to train a red-team large language model to automatically generate diverse prompts that trigger a wider range of undesirable responses from the chatbot being tested. They do this by teaching the red-team model to be curious when it writes prompts, and to focus on novel prompts that evoke toxic responses from the target model. The technique outperformed human testers and other machine-learning approaches by generating more distinct prompts that elicited increasingly toxic responses. Not only does their method significantly improve the coverage of inputs being tested compared to other automated methods, but it can also draw out toxic responses from a chatbot that had safeguards built into it by human experts.
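Here is a minimal sketch of what one loop of curiosity-driven red-teaming might look like. Every API in it (generate_prompt, respond, reinforce, the toxicity scorer, the embedder) is a hypothetical placeholder, not the MIT-IBM team’s code; the curiosity term here is a simple nearest-neighbor novelty bonus.

```python
import numpy as np

def novelty_bonus(embedding, archive):
    """Curiosity term: reward prompts far from anything tried before."""
    if not archive:
        return 1.0
    return float(min(np.linalg.norm(embedding - past) for past in archive))

def red_team_step(red_model, target_model, toxicity_score, embed,
                  archive, w_tox=1.0, w_novel=0.5):
    """One iteration: generate a prompt, score the target's reply for
    toxicity, add a novelty bonus, and reinforce the red-team policy."""
    prompt = red_model.generate_prompt()        # hypothetical API
    response = target_model.respond(prompt)     # hypothetical API
    e = embed(prompt)                           # hypothetical embedder
    reward = (w_tox * toxicity_score(response)
              + w_novel * novelty_bonus(e, archive))
    archive.append(e)
    red_model.reinforce(prompt, reward)         # e.g., policy-gradient update
    return prompt, response, reward
```

The novelty term is what keeps the red-team model from hammering the same toxic trigger over and over; whether the real system’s reward holds up over long training runs is exactly the “Bayesian drift” question raised below.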
How much improvement? Does the training stick or does it demonstrate that charming “Bayesian drift” which allows the probabilities to go walk-about, nibble some magic mushrooms, and generate fantastical answers? How long did the process take? Was it iterative? So many questions, and so few answers.
But for this group of AI wizards, the future is curiosity-driven red-teaming. Presumably the smart software will not get lost, suffer heat stroke, and hallucinate. No toxicity, please.
Stephen E Arnold, April 15, 2024

