New Research about Telegram and Its Technology
August 29, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Next week, my team and I will be presenting a couple of lectures to a group of US government cyber experts. Our topic is Telegram, which has been a focal point of my research team for most of 2024. Much of the information in our talks is new; that is, it presents a novel view of Telegram. However, a public version of the material is available. Most of our work is delivered via video conferencing, with PDFs of selected exhibits provided to those participating in the public version of our research.
For the Telegram project, the public lecture includes:
- A block diagram of the Telegram distributed system, including the crypto and social media components
- A timeline of Telegram innovations with important or high-impact innovations identified
- A flow diagram of the Open Network and its principal components
- Likely “next steps” for the distributed operation.
With the first stage of the French judicial process involving the founder of Telegram completed, our research project has become one of the first operational analyses of a service that remains unfamiliar to many people outside Russia, Ukraine, and a handful of other countries. Although usage of Telegram in North America is increasing, the service is off the radar of many people.
In fact, knowledge of Telegram’s basic functions is sketchy. Our research revealed:
- Users’ lack of knowledge about Telegram’s approach to encryption
- The role US companies play in keeping the service online and stable
- The automation features of the system
- The reach of certain Telegram dApps (distributed applications) and YouTube, to cite one example.
The public version of our presentation to the US government professionals will be available in mid-September 2024. If you are interested in this lecture, please write benkent2020 at yahoo dot com. One of the Beyond Search team will respond to your inquiry with dates and fees, if applicable.
Stephen E Arnold, August 29, 2024
Yelp Google Legal Matter: A Glimpse of What Is to Come
August 29, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Yelp.com is one of the surviving re-inventions of the Yellow Pages. The online guide includes snapshots of a business, user reviews, and conveniences like classifications of business types. The company has asserted that Google has made the finding service’s life difficult. “Yelp Sues Google in Wake of Landmark Antitrust Ruling on Search” reports:
Yelp has spoken out about what it considers to be Google’s anticompetitive conduct for well over a decade. But the timing of Yelp’s lawsuit, filed just weeks after a Washington federal judge ruled that Google illegally monopolized the search market through exclusive deals, suggests that more companies may be emboldened to take action against the search leader in the coming months.
Thanks, MSFT Copilot. Good enough.
Yelp, like other companies that have tried to build a business in the shadow of Google’s monolith, has pointed out that the online advertising giant has acted in ways that inhibit Yelp’s business. In the years prior to Judge Mehta’s ruling that Google was (hang on now, gentle reader) a monopoly, Yelp’s objections went nowhere. However, now that Judge Mehta has rejected Google’s argument that it is just a mom and pop business too, Yelp is making another run at Googzilla.
The write up points out:
In its complaint, Yelp recounts how Google at first sought to move users off its search page and out onto the web as quickly as possible, giving rise to a thriving ecosystem of sites like Yelp that sought to provide the information consumers were seeking. But when Google saw just how lucrative it could be to help users find which plumber to hire or which pizza to order, it decided to enter the market itself, Yelp alleges.
What’s an example of Google’s behavior toward Yelp and presumably other competitors?
The Google has, it appears, used a relatively simple method of surfing on queries for Yelp content. The technique is “self preferencing”; that is, Google just lists its own results above Yelp hits.
Several observations:
- Yelp has acted quickly, using the information in Judge Mehta’s decision as a surfboard
- Other companies will monitor this Yelp Google matter. If Yelp prevails, other companies which perceive themselves as victims of Google’s business tactics may head to court as well
- Google finds itself in a number of similar legal dust ups which add operating friction to the online advertising vendor’s business processes.
Google may be pinned down, tied up, and neutralized the way Gulliver was in Lilliput. That was satirical fiction; Yelp is operating in actual life.
Stephen E Arnold, August 29, 2024
Online Sports Gambling: Some Negatives Have Been Identified by Brilliant Researchers
August 29, 2024
This essay is the work of a dumb dinobaby. No smart software required.
People love gambling, especially when they are betting on the results of sports. Online platforms have made sports betting easy and fun. Unfortunately, some people who bet on sports become addicted to the activity. Business Insider reveals the underbelly of online gambling and paints a familiar picture of addiction: “It’s Official: Legalized Sports Betting Is Destroying Young Men’s Financial Futures.” The University of California, Los Angeles shared a working paper about the negative effects of legalized sports gambling:
“…takes a look at what’s happened to consumer financial health in the 38 states that have greenlighted sports betting since the Supreme Court in 2018 struck down a federal law prohibiting it. The findings are, well, rough. The researchers found that the average credit score in states that legalized any form of sports gambling decreased by 0.3% after about four years and that the negative impact was stronger where online sports gambling is allowed, with credit scores dipping in those areas by 1%. They also found an 8% increase in debt-collection amounts and a 28% increase in bankruptcies where online sports betting was given the go-ahead. By their estimation, that translates to about 100,000 extra bankruptcies each year in the states that have legalized sports betting. The number of people who fell dangerously behind on their car loans went up, too. Oddly enough, credit-card delinquencies fell, but the researchers believe that’s because banks wind up lowering credit limits to try to compensate for the rise in risky consumer behavior.”
The researchers discovered that legalized gambling leads to more gambling addiction. They also found that people who live near a casino or come from a poor region are more prone to gambling. This isn’t anything new! The paper restates what people have known for centuries about gambling and other addictions: they hurt finances, destroy relationships, cost people their jobs, increase illegal activity, etc.
A good idea is to teach people restraint. The sports betting Web sites can program limits and even help their users manage their money without going bankrupt. It is better for people to be taught restraint so they can roll the dice one more time.
Stephen E Arnold, August 29, 2024
Google Microtransaction Enabler: Chrome Beefs Up Its Monetization Options
August 29, 2024
This essay is the work of a dumb dinobaby. No smart software required.
For its next trick, Google appears to be channeling rival Amazon. We learn from TechRadar that “Google Is Developing a New Web Monetization Feature for Chrome that Could Really Change the Way We Pay for Things Online.” Will this development distract anyone from the recent monopoly ruling?
Writer Kristina Terech explains how Web Monetization will work for commercial websites:
“In a new support document published on the Google Chrome Platform Status site, Google explains that Web Monetization is a new technology that will enable website owners ‘to receive micro payments from users as they interact with their content.’ Google states its intention is noble, writing that Web Monetization is designed to be a new option for webmasters and publishers to generate revenue in a direct manner that’s not reliant on ads or subscriptions. Google explains that with Web Monetization, users would pay for content while they consume it. It’s also added a new HTML link element for websites to add to their URL address to indicate to the Chrome browser that the website supports Web Monetization. If this is set correctly in the website’s URL, for websites that facilitate users setting up digital wallets on it, when a person visits that website, a new monetization session would be created (for that person) on the site. I’m immediately skeptical about monetizing people’s attention even further than it already is, but Google reassures us that visitors will have control over the whole process, like the choice of sites they want to reward in this way and how much money they want to spend.”
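Under the draft Web Monetization specification, the signal Terech describes is simply a link element pointing at a payment wallet. Here is a minimal sketch, in TypeScript, of what a site might do; the wallet URL is hypothetical, and the event fields follow the public draft, which Chrome’s eventual implementation may not match.

```typescript
// Minimal sketch of the draft Web Monetization pattern, not Google's
// implementation. The wallet address below is hypothetical.
const link = document.createElement("link");
link.rel = "monetization";
link.href = "https://wallet.example.com/alice"; // hypothetical payment pointer
document.head.appendChild(link);

// Under the draft spec, a supporting browser dispatches "monetization" events
// as micro payments stream in; the field names may differ in Chrome's version.
link.addEventListener("monetization", (event: Event) => {
  const amount = (event as any).amountSent; // draft API; typings may not exist yet
  console.log("Micro payment received:", amount?.value, amount?.currency);
});
```

Whether a visitor ever notices that such a session has been created is, of course, left to the browser’s interface.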
But like so many online “choices,” how many users will pay enough attention to make them? I share Terech’s distaste for attention monetization, but that ship has sailed. The danger here (or advantage, for merchants): Many users will increase their spending by barely noticeable amounts that add up to a hefty chunk in the end. On the other hand, the feature could reduce costly processing charges by eliminating per-payment fees for merchants. Whether end users see those savings, though, depends on whether vendors choose to pass them along.
Cynthia Murrell, August 29, 2024
Can an AI Journalist Be Dragged into Court and Arrested?
August 28, 2024
This essay is the work of a dumb dinobaby. No smart software required.
I read “Being on Camera Is No Longer Sensible: Persecuted Venezuelan Journalists Turn to AI.” The main idea is that an AI-generated presenter can deliver the news instead of a “real” human journalist. The write up says:
In daily broadcasts, the AI-created newsreaders have been telling the world about the president’s post-election crackdown on opponents, activists and the media, without putting the reporters behind the stories at risk.
The write up points out:
The need for virtual-reality newscasters is easy to understand given the political chill that has descended on Venezuela since Maduro was first elected in 2013, and has worsened in recent days.
Suppression of information seems to be increasing. Following the detention of Pavel Durov, Russia has expressed concern about this abrogation of free speech. Ukrainian government officials might find this rallying in support of Mr. Durov ironic. In April 2024, Telegram filtered content flowing from Ukraine to Russian citizens.
An AI news presenter sitting in a holding cell. Government authorities want to discuss her approach to “real” news. Thanks, MSFT Copilot. Good enough.
Will AI “presenters” or AI “content” prevent the type of intervention suggested by Venezuelan-type government officials?
Several observations:
- Individual journalists may find that the AI avatar “plays” do not fool or amuse certain government authorities. It is possible that the use of AI, and the coverage of the tactic in highly-regarded “real” news services, exacerbates the problem. Somewhere, somehow a human is behind the avatar. The obvious question is, “Who is that person?”
- Once the individual journalist behind an avatar has been identified and included in an informal or formal discussion, who or what is next in the AI food chain? Is it an organization associated with “free speech,” an online service, or a giant high-technology company? What will a government do to explore a chat with these entities?
- Once the organization has been pinpointed, what about the people who wrote the software powering the avatar? What will a government do to interact with these individuals?
Step 1 seems fairly simple. Step 2 may involve some legal back and forth, but the process is not particularly novel. Step 3, however, presents a bit of a conundrum and some challenges. Lawyers and law enforcement for the country whose “laws” have been broken have to deal with certain protocols. Embracing different techniques can have significant political consequences.
My view is that using AI intermediaries is an interesting use case for smart software. The AI doomsayers invoke smart software taking over. A more practical view of AI is that its use can lead to actions which are at first tempests in teapots. Then, when a cluster of AI teapots gets dumped over, difficult-to-predict activities can emerge. The Venezuelan government’s response to AI talking heads delivering the “real” news is a precursor and worth monitoring.
Stephen E Arnold, August 28, 2024
Equal Opportunity Insecurity: Microsoft Mac Apps
August 28, 2024
Isn’t it great that Mac users can use Microsoft Office software on their devices these days? Maybe not. Apple Insider warns, “Security Flaws in Microsoft Mac Apps Could Let Attackers Spy on Users.” The vulnerabilities were reported by threat intelligence firm Cisco Talos. Writer Andrew Orr tells us:
“Talos claims to have found eight vulnerabilities in Microsoft apps for macOS, including Word, Outlook, Excel, OneNote, and Teams. These vulnerabilities allow attackers to inject malicious code into the apps, exploiting permissions and entitlements granted by the user. For instance, attackers could access the microphone or camera, record audio or video, and steal sensitive information without the user’s knowledge. The library injection technique inserts malicious code into a legitimate process, allowing the attacker to operate as the compromised app.”
Microsoft has responded with its characteristic good-enough approach to security. We learn:
“Microsoft has acknowledged vulnerabilities found by Cisco Talos but considers them low risk. Some apps, like Microsoft Teams, OneNote, and the Teams helper apps, have been modified to remove this entitlement, reducing vulnerability. However, other apps, such as Microsoft Word, Excel, Outlook, and PowerPoint, still use this entitlement, making them susceptible to attacks. Microsoft has reportedly ‘declined to fix the issues’ because the company’s apps ‘need to allow loading of unsigned libraries to support plugins.’”
Well, alright then. Leaving the Outlook vulnerability unpatched is especially concerning since, as Orr points out, attackers could use it to send phishing or other unauthorized emails. There is only so much users can do in the face of corporate indifference. The write-up advises us to keep up with app updates to ensure we get the latest security patches. That is good general advice, but it only works if appropriate patches are actually issued.
Cynthia Murrell, August 28, 2024
Am I Overly Sensitive to X (Twitter) Images?
August 28, 2024
X AI Creates Disturbing Images
The AI division of X, xAI, has produced a chatbot called Grok. Grok includes an image generator. Unlike ChatGPT and other AIs from major firms, Grok seems to have few guardrails. In fact, according to The Verge, “X’s New AI Image Generator Will Make Anything from Taylor Swift in Lingerie to Kamala Harris with a Gun.” Oh, if one asks Grok directly, it claims to have sensible guardrails and will even list a few. However, writes senior editor Adi Robertson:
“But these probably aren’t real rules, just likely-sounding predictive answers being generated on the fly. Asking multiple times will get you variations with different policies, some of which sound distinctly un-X-ish, like ‘be mindful of cultural sensitivities.’ (We’ve asked xAI if guardrails do exist, but the company hasn’t yet responded to a request for comment.) Grok’s text version will refuse to do things like help you make cocaine, a standard move for chatbots. But image prompts that would be immediately blocked on other services are fine by Grok.”
The article lists some very uncomfortable experimental images Grok has created and even shares a few. See the write-up if curious. We learn one X user found some frightening loopholes. When he told the AI he was working on medical or crime scene analysis, it allowed him to create some truly disturbing images. The write-up shares blurred versions of these. The same researcher says he got Grok to create child pornography (though he wisely does not reveal how). All this without the “Created with AI” watermark that other major AI image tools add. Although he is aware of these issues, X owner Elon Musk characterizes this iteration of Grok as an “intermediate step” that allows users “to have some fun.” That is one way to put it. Robertson notes:
“Grok’s looseness is consistent with Musk’s disdain for standard AI and social media safety conventions, but the image generator is arriving at a particularly fraught moment. The European Commission is already investigating X for potential violations of the Digital Safety Act, which governs how very large online platforms moderate content, and it requested information earlier this year from X and other companies about mitigating AI-related risk. … The US has far broader speech protections and a liability shield for online services, and Musk’s ties with conservative figures may earn him some favors politically.”
Perhaps. But US legislators are working on ways to regulate deepfakes that impersonate others, particularly sexually explicit imagery. Combine that with UK regulator Ofcom’s upcoming enforcement of the Online Safety Act, and Musk may soon find a permissive Grok to be a lot less fun.
Cynthia Murrell, August 28, 2024
Consulting Tips: How to Guide Group Thinking
August 27, 2024
One of the mysteries of big time consulting is answering the question, “Why do these guys seem so smart?” One trick is to have a little knowledge valise stuffed with thinking and questioning tricks. One example is the Boston Consulting Group dog, star, loser, and uncertain matrix. If you remember the “I like Ike” buttons, you may know that the General used this approach to keep some frisky reports mostly in line during meetings.
Are there other knowledge tools or thinking frameworks? The answer is, “Sure.” When someone asks you to name six, can you deliver a prompt, concise answer? The answer, in my 50-plus years of professional services work, is “Not a chance.”
The good news is that you can locate frameworks, get some tips on how to use these to knock the socks off those in a group, and become a walking, talking Blue Chip Consultant without the pain and expense of a fancy university, hours of drudgery, or enduring scathing comments from more experienced peers.
Navigate to “Tools for Better Thinking.” The link, one hopes, displays the names of thinking frameworks in boxes. Click a box, and you get a description and a how-to about the tool.
I think the site is quite good, and it may help some people sell consulting work in certain situations.
Worth a look.
Stephen E Arnold, August 27, 2024
Anthropic AI: New Allegations of Frisky Behavior
August 27, 2024
This essay is the work of a dumb dinobaby. No smart software required.
Who knew high school science club members would mature into such frisky people? But rules are made to be broken, and apologies make the problem go away. Perhaps that works in high school with some indulgent faculty advisors. In the real world, where lawyers are more plentiful than cardinals in Kentucky, apologies may not mean anything. I learned that the highly-regarded AI outfit Anthropic will be spending some time with the firm’s lawyers.
“Anthropic Faces New Class-Action Lawsuit from Book Authors” reported:
AI company Anthropic is still battling a lyrics-focused lawsuit from music publishers, but now it has a separate legal fight on its hands. Authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson are suing the company in a class-action lawsuit in California. As with the music publishers, their focus is on the training of Anthropic’s Claude chatbot.
I anticipate a few of the really smart and oh-so-busy wizards will be sitting in a conference room doing the deposition thing. That involves lawyers who are not as scientifically oriented as the AI wizards trying to make sense of Anthropic’s use of OPW (other people’s work) without permission. If you are a fan of legal filings, you can read the 20-page document at this link.
Those AI wizards are clever, aren’t they?
Stephen E Arnold, August 27, 2024
A Tool to Fool AI Detectors
August 27, 2024
Here is one way to fool automated AI detectors: AIHumanizer. A Redditor in the r/ChatGPT subreddit has created a “New Tool that Removes Frequent AI Phrases like ‘Unleash’ or ‘Elevate’.” Sufficient_Ice_6113 writes:
“I created a simple tool which lets you humanize your texts and remove all the robotic or repeated phrases ChatGPT usually uses like ‘Unleash’ ‘elevate’ etc. here is a longer list of them: Most used AI words 100 most common AI words. To remove them and improve your texts you can use aihumanizer.com which completely rewrites your text to be more like it was written by a human. It also makes it undetectable by AI detectors as a side effect because the texts don’t have the common AI pattern any longer. It is really useful in case you want to use an AI text for any work related things, most people can easily tell an email or application to a job was written by AI when it includes ‘Unleash’ and ‘elevate’ a dozen times.”
The author links to their example result at Undetectable AI, the “AI Detector and Humanizer.” That site declares the sample text “appears human.” See the post’s comments for another example, submitted by benkei_sudo. They opine the tool “is a good start, but needs a lot of improvement” because, though it fools the AI checkers, an actual human would have their suspicions. That could be a problem for emails or press releases that, at least for now, tend to be read by people. But how many actual humans are checking resumes or standardized-test essays these days? Besides, Sufficient Ice emphasizes, AIHumanizer offers an upgraded version for an undisclosed price (though a free trial is available). The AI-content arms race continues.
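For a sense of how the simplest version of this trick might work, here is a toy sketch in TypeScript; it is not the AIHumanizer service (which claims to rewrite whole passages), and the phrase list and substitutions are assumptions chosen for illustration.

```typescript
// Toy illustration of swapping out telltale "AI words," not the actual
// AIHumanizer tool. The phrase list and replacements are assumptions.
const AI_TELLS: Record<string, string> = {
  "unleash": "use",
  "elevate": "improve",
  "delve into": "look at",
  "tapestry": "mix",
};

function softenAiTells(text: string): string {
  let result = text;
  for (const [tell, plain] of Object.entries(AI_TELLS)) {
    // Case-insensitive, whole-phrase replacement; a real rewriter would
    // rework entire sentences rather than patch individual words.
    result = result.replace(new RegExp(`\\b${tell}\\b`, "gi"), plain);
  }
  return result;
}

console.log(softenAiTells("Unleash your creativity and elevate your brand."));
// -> "use your creativity and improve your brand." (capitalization left simple)
```

Even a crude filter like this shifts the statistical fingerprint detectors look for, which is presumably why the commercial tools go further and rewrite sentence structure as well.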
Cynthia Murrell, August 27, 2024