US Bans Intellexa For Spying On Senator

March 22, 2024

This essay is the work of a dumb dinobaby. No smart software required.

One of the worst ideas in modern society is to spy on the United States. The idea becomes worse when the target is a US politician. Intellexa is a notorious company that designs software to hack smartphones and transform them into surveillance devices. NBC News reports how Intellexa’s software was recently used in an attempt to hack a US senator: “US Bans Maker Of Spyware That Targeted A Senator’s Phone.”

Intellexa designed Predator, software that, once downloaded onto a phone, turns it into a surveillance device. Predator can switch on a phone’s camera and microphone, track the user’s location, and download files. The US Treasury Department banned Intellexa from conducting business in the US, and US citizens are barred from working with the company. These are the most aggressive sanctions the US has ever taken against a spyware company.

The official ban also targets Intellexa’s founder Tal Dilian, employee Sara Hamou, and four companies affiliated with Intellexa. Predator is also used by authoritarian governments to spy on journalists, human rights workers, and anyone deemed “suspicious:”

“An Amnesty International investigation found that Predator has been used to target journalists, human rights workers and some high-level political figures, including European Parliament President Roberta Metsola and Taiwan’s outgoing president, Tsai Ing-Wen. The report found that Predator was also deployed against at least two sitting members of Congress, Rep. Michael McCaul, R-Texas, and Sen. John Hoeven, R-N.D.”

John Scott-Railton, a senior spyware researcher at the University of Toronto’s Citizen Lab, said the US Treasury’s sanctions will rock the spyware world. He added that they could also inspire people in the industry to change careers or countries.

Intellexa isn’t the only outfit that makes spyware. Hackers can also design their own and share it with other bad actors.

Whitney Grace, March 22, 2024

The TikTok Flap: Wings on a Locomotive?

March 20, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I find the TikTok flap interesting. The app was purposeless until someone discovered that pre-teens and those with similar mental architecture would watch short videos on semi-forbidden subjects; for instance, see-through dresses, the thrill of synthetic opioids, updating the Roman vomitorium for a quick exit from parental reality, and the always-compelling self-harm presentations. But TikTok is not just a content juicer; it can provide some useful data in its log files. Cross correlating these data can provide some useful insights into human behavior. Slicing geographically makes it possible to do wonderful things. Apply some filters and a psychological profile can be output from a helpful intelware system. Whether these types of data surfing take place is not important to me. The infrastructure exists and can be used (with or without authorization) by anyone with access to the data.
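The kind of log-file slicing described above can be sketched in a few lines of Python. To be clear, this is a hypothetical illustration, not TikTok’s actual pipeline; the event fields, sample data, and function name are invented.

```python
from collections import Counter, defaultdict

# Hypothetical app log events: (user_id, region, topic_watched)
events = [
    ("u1", "US-KY", "self-harm"),
    ("u1", "US-KY", "opioids"),
    ("u2", "US-KY", "opioids"),
    ("u3", "US-CA", "fashion"),
    ("u3", "US-CA", "fashion"),
]

def profile_by_region(log):
    """Cross-correlate viewing topics with geography: for each
    region, count how often each topic was watched."""
    regions = defaultdict(Counter)
    for _user, region, topic in log:
        regions[region][topic] += 1
    return regions

profiles = profile_by_region(events)
# The most-watched topic per region hints at a behavioral profile.
top = {r: c.most_common(1)[0][0] for r, c in profiles.items()}
print(top)  # {'US-KY': 'opioids', 'US-CA': 'fashion'}
```

Swap the counter for a real profiling model and apply some filters, and the same group-by-region skeleton holds; the point is how little code the infrastructure requires once the log data exist.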


Like bird wings on a steam engine, the ban on TikTok might not fly. Thanks, MSFT Copilot. How is your security revamp coming along?

What’s interesting to me is that the US Congress took action to make some changes in the TikTok business model. My view is that social media services required pre-emptive regulation when they first poked their furry, smiling faces into young users’ immature brains. I gave several talks about the risks of social media online in the 1990s. I even suggested remediating actions at the open source intelligence conferences operated by Major Robert David Steele, a former CIA professional and conference entrepreneur. As I recall, no one paid any attention. I am not sure anyone knew what I was talking about. Intelligence, then, was not into the strange new thing of open source intelligence and weaponized content.

Flash forward to 2024: as the US government geared up to “ban” TikTok or force ByteDance to divest itself of the app, many interesting opinions flooded the poorly maintained and rapidly deteriorating information highway. I want to highlight two of these write ups and their main points, and offer a few observations. (I understand that no one cared 30 years ago, but perhaps a few people will pay attention as I write this on March 16, 2024.)

The first write up is “A TikTok Ban Is a Pointless Political Turd for Democrats.” The language sets the scene for the analysis. I think the main point is:

Banning TikTok, but refusing to pass a useful privacy law or regulate the data broker industry is entirely decorative. The data broker industry routinely collects all manner of sensitive U.S. consumer location, demographic, and behavior data from a massive array of apps, telecom networks, services, vehicles, smart doorbells and devices (many of them *gasp* built in China), then sells access to detailed data profiles to any nitwit with two nickels to rub together, including Chinese, Russian, and Iranian intelligence. Often without securing or encrypting the data. And routinely under the false pretense that this is all ok because the underlying data has been “anonymized” (a completely meaningless term). The harm of this regulation-optional surveillance free-for-all has been obvious for decades, but has been made even more obvious post-Roe. Congress has chosen, time and time again, to ignore all of this.

The second write up is “The TikTok Situation Is a Mess.” This write up eschews the colorful language of the TechDirt essay. Its main point, in my opinion, is:

TikTok clearly has a huge influence over a massive portion of the country, and the company isn’t doing much to actually assure lawmakers that situation isn’t something to worry about.

Thus, the article makes clear its concern about the outstanding individuals serving in a representative government in Washington, DC, the true home of ethical behavior in the United States:

Congress is a bunch of out-of-touch hypocrites.

What do I make of these essays? Let me share my observations:

  1. It is too late to “fix up” the TikTok problem or clean up the DC “mess.” The time to act was decades ago.
  2. Virtual private networks and more sophisticated “get around” technology will be tapped by fifth graders so that short form videos about forbidden subjects can be consumed. How long will it take a savvy fifth grader to “teach” her classmates about a point-and-click VPN? Two or three minutes. Will the hungry minds recall the information? Yep.
  3. The idea that “privacy” has not been regulated in the US is a fascinating point. Who exactly was pro-privacy in the wake of 9/11? Who exactly declined to use Google’s services as information about the firm’s data hoovering surfaced in the early 2000s? I will not provide the answer to this question because Google’s 90 percent plus share of the online search market presents the answer.

Net net: TikTok is one example of software with a penchant for capturing data and retaining those data in a form which can be processed for nuggets of information. One can point to Alibaba.com, CapCut.com, Temu.com, or my old Huawei mobile phone, which loved to connect to servers in Singapore until our fiddling with the device killed it dead.

Stephen E Arnold, March 20, 2024

Worried about TikTok? Do Not Overlook CapCut

March 18, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I find the excitement about TikTok interesting. The US wants to play the reciprocity card; that is, China disallows US apps, so the US can ban TikTok. How influential is TikTok? US elected officials learned firsthand that TikTok users can get messages through to what is often a quite unresponsive cluster of lawmakers. But let’s leave TikTok aside.


Thanks, MSFT Copilot. Good enough.

What do you know about the ByteDance cloud software CapCut? Ah, you have never heard of it. That’s not surprising because it is aimed at those who make videos for TikTok (big surprise) and other video platforms like YouTube.

CapCut has been gaining supporters like the happy-go-lucky people who published “how to” videos about CapCut on YouTube. On TikTok, CapCut short form videos have tallied billions of views. What makes it interesting to me is that it wants to phone home, store content in the “cloud”, and provide high-end tools to handle some tricky video situations like weird backgrounds on AI generated videos.

The product CapCut was originally named (I believe) JianYing or Viamaker (the story varies by source), which means nothing to me. The Google suggests its meanings could range from “hard” to “paper cut-out.” I am not sure I buy these suggestions because Chinese is a linguistically slippery fish. Is that a question or a horse? In 2020, the app got a bit of a shove into the world outside the estimable Middle Kingdom.

Why is this important to me? Here are my reasons for creating this short post:

  • Based on my tests of the app, it has some of the same data-hoovering functions as TikTok
  • The image data and information about users provide another source of potentially high-value information to those with access to it
  • Data from “casual” videos might be quite useful when the person making the video has landed a job in a US national laboratory or in one of the high-tech playgrounds in Silicon Valley. Am I suggesting blackmail? Of course not, but a release of certain imagery might be an interesting test of the videographer’s self-esteem.

If you want to know more about CapCut, try these links:

  • Download (ideally to a burner phone or a PC specifically set up to test interesting software) at www.capcut.com
  • Read about the company CapCut in this 2023 Recorded Future write up
  • Learn about CapCut’s privacy issues in this Bloomberg story.

Net net: Clever stuff, but who is paying attention? Parents? Regulators? Chinese intelligence operatives?

Stephen E Arnold, March 18, 2024

AI to AI Program for March 12, 2024, Now Available

March 12, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Erik Arnold, with some assistance from Stephen E Arnold (the father), has produced another installment of “AI to AI: Smart Software for Government Use Cases.” The program presents news and analysis about the use of artificial intelligence (smart software) in government agencies.


The ad-free program features Erik S. Arnold, Managing Director of Govwizely, a Washington, DC consulting and engineering services firm. Arnold has extensive experience working on technology projects for the US Congress, the Capitol Police, the Department of Commerce, and the White House. Stephen E Arnold, an adviser to Govwizely, also participates in the program. The current episode covers five topics in a father-and-son exploration of important, yet rarely discussed subjects. These include the analysis of law enforcement body camera video by smart software, the appointment of an AI information czar by the US Department of Justice, copyright issues faced by UK artificial intelligence projects, the role of the US Marines in the Department of Defense’s smart software projects, and the potential use of artificial intelligence in the US Patent Office.

The video is available on YouTube at https://youtu.be/nsKki5P3PkA. The Apple audio podcast is at this link.

Stephen E Arnold, March 12, 2024

Palantir: The UK Wants a Silver Bullet

March 11, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The UK is an interesting nation state. On one hand, one has upmarket, high-class activities taking place not too far from the squatters in Bristol. Fancy lingo and nifty arguments (Hear, hear!) match up nicely with some wonky computer decisions. The British government seems to have a keen interest in finding silver bullets; that is, solutions which will make problems go away. How did that work out for the postal service?

I read “Health Data – It Isn’t Just Palantir or Bust,” written by lawyer, pundit, novelist, and wizard Cory Doctorow. The essay focuses on a tender offer captured by Palantir Technologies. The idea is that the British National Health Service has lots of data. The NHS has done some wild and crazy things to make those exposed to the NHS safer. Sorry, I can’t explain one taxonomy-centric project which went exactly nowhere despite the press releases generated by the vendors, speeches, presentations, and assurances that, by gad, these health data will be managed. Yeah, and Bristol’s nasty areas will be fixed up soon.


The British government professional is struggling with software that was described as a single solution. Thanks, MSFT Copilot. How is your security perimeter working today? Oh, that’s too bad. Good enough.

What is interesting about the write up is not the somewhat repetitive retelling of the NHS’ computer challenges. I want to highlight the lawyer-novelist’s comments about the American intelware outfit Palantir Technologies. What do we learn about Palantir?

Here is the first quote from the essay:

But handing it all over to companies like Palantir isn’t the only option

The probability that a person munching on fish and chips in Swindon will know about Palantir is effectively zero. But it is clear that “like Palantir” suggests something interesting, maybe fascinating.

Here’s another reference to Palantir:

Even more bizarre are the plans to flog NHS data to foreign military surveillance giants like Palantir, with the promise that anonymization will somehow keep Britons safe from a company that is literally named after an evil, all-seeing magic talisman employed by the principal villain of Lord of the Rings (“Sauron, are we the baddies?”).

The word choice paints a picture of an American intelware company and conveys a decidedly negative message; for instance, the words safe, evil, all-seeing, villain, baddies, etc. What’s going on? Here’s a third passage:

The British Medical Association and the conference of England LMC Representatives have endorsed OpenSAFELY and condemned Palantir. The idea that we must either let Palantir make off with every Briton’s most intimate health secrets or doom millions to suffer and die of preventable illness is a provably false choice.

It seems the American company is known to the BMA, and that organization has figured out Palantir is a bit of a sticky wicket.

Several observations:

  1. My view is that Palantir promised a silver bullet to solve some of the NHS data challenges. The British government accepted the argument, so full steam ahead. Thus, the problem, I would suggest, is the procurement process.
  2. The agenda of the write up is to associate Palantir with some relatively negative concepts. Is this fair? Probably not, but it is typical of certain “real” analysts and journalists to mix up complex issues in order to create doubt about vendors of specialized software. These outfits are not perfect, but their products are a response to quite difficult problems.
  3. I think the write up is a mash up of anger about tender offers, the ineptitude of British government computer skills, the use of cross correlation as a symbol of Satan, and social outrage about the Britain which is versus the Britain some wish existed.

Net net: Will Palantir change because of this negative characterization of its products and services? Nope. Will the NHS change? Are you kidding me, of course not. Will the British government’s quest for silver bullet solutions stop? Let’s tackle this last question this way: “Why not write it in a snail mail letter and drop it in the post?”

Intelware is just so versatile, at least in the marketing collateral.

Stephen E Arnold, March 11, 2024

The Internet as a Library and Archive? Ho Ho Ho

March 8, 2024

This essay is the work of a dumb dinobaby. No smart software required.

I know that I find certain Internet-related items a knee slapper. Here’s an example: “Millions of Research Papers at Risk of Disappearing from the Internet.” A surprising number of individuals, young at heart and allegedly informed seniors alike, think the “Internet” is a library or, better yet, an archive like the Library of Congress’ collection of “every” book.


A person deleting data with some degree of fierceness. Yep, thanks MSFT Copilot. After three tries, this is the best of the lot for a prompt asking for an illustration of data being deleted from a personal computer. Not even good enough but I like the weird orange coloration.

Here are some basics of how “Internet” services work:

  1. Every year, the cost of storing old and usually never or rarely accessed data goes up. A bean counter calls a meeting and asks, “Do we need to keep paying for ping, power, and pipes?” Someone points out that usage of the data described as “old” is 0.0003 percent or whatever number the bright young sprout has guess-timated. The decision is, as you might guess, dump the old files and reduce other costs immediately.
  2. Doing “data” or “online” is expensive, and the costs associated with each are very difficult, if not impossible, to control. Neither government agencies, non-governmental outfits, the United Nations, a library in Cleveland, nor the estimable Harvard University have sufficient money to make information available or keep it at hand. Thus, stuff disappears.
  3. Well-intentioned outfits like the Internet Archive or Project Gutenberg are in the same accountant’s ink pot. Not every Web site is indexed and archived comprehensively. Not every book can be digitized and converted to a format someone thinks will last “forever.” As a result, one has a better chance of discovering new information browsing through donated manuscripts at the Vatican Library than running an online query.
  4. If something unique is online “somewhere,” that item may be unfindable. Hey, what about Duke University’s collection of “old” books from the 17th century? Who knew?
  5. Will a government agency archive digital content in a comprehensive manner? Nope.
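The bean counter arithmetic in point 1 is simple division, which a short sketch makes concrete. The storage price, data volume, and access counts below are invented round numbers for illustration, not anyone’s actual figures.

```python
# Hypothetical figures: 500 TB of "old" data at $0.02 per GB-month.
old_data_gb = 500 * 1000          # 500 TB expressed in GB
price_per_gb_month = 0.02         # assumed archival storage price in USD
accesses_per_month = 15           # observed reads of the old collection
total_objects = 5_000_000         # objects in the old collection

# What the bean counter sees: a recurring bill for data nobody touches.
monthly_cost = old_data_gb * price_per_gb_month
usage_rate = accesses_per_month / total_objects

print(f"${monthly_cost:,.0f}/month")           # $10,000/month
print(f"{usage_rate:.6%} of objects touched")  # 0.000300% of objects touched
```

With these made-up inputs, the usage figure lands on exactly the sort of 0.0003 percent number the bright young sprout waves around, and the decision to dump the old files writes itself.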

The article about “risks of disappearing” is a hoot. Notice this passage:

“Our entire epistemology of science and research relies on the chain of footnotes,” explains author Martin Eve, a researcher in literature, technology and publishing at Birkbeck, University of London. “If you can’t verify what someone else has said at some other point, you’re just trusting to blind faith for artefacts that you can no longer read yourself.”

I like that word “epistemology.” Just one small problem: Trust. Didn’t the president of Stanford University have an opportunity to find his future elsewhere due to some data wonkery? Google wants to earn trust. Other outfits don’t fool around with trust; these folks gather data, exploit it, and resell it. Archiving and making it findable to a researcher or law enforcement? Not without friction, lots and lots of friction. Why verify? Estimates of non-reproducible research range from 15 percent to 40 percent of scientific, technical, and medical peer reviewed content. Trust? Hello, it’s time to wake up.

Many estimate how much new data are generated each year. I would suggest that data falling off the back end of online systems has been an active process. The first time an accountant hears the IT people say, “We can just roll off the old data and hold storage stable” is right up there with avoiding an IRS audit, finding a life partner, and billing an old person for much more than the accounting work is worth.

After 25 years, there is “risk.” Wow.

Stephen E Arnold, March 8, 2024

AI and Warfare: Gaza Allegedly Experiences AI-Enabled Drone Attacks

March 7, 2024

This essay is the work of a dumb dinobaby. No smart software required.

We have officially crossed a line. DeepNewz reveals: “AI-Enabled Military Tech and Indian-Made Hermes 900 Drones Deployed in Gaza.” Is this what they mean by “helpful AI”? We cannot say we are surprised. The extremely brief write-up tells us:

“Reports indicate that Israel has deployed AI-enabled military technology in Gaza, marking the first known combat use of such technology. Additionally, Indian-made Hermes 900 drones, produced in collaboration between Adani’s company and Elbit Systems, are set to join the Israeli army’s fleet of unmanned aerial vehicles. This development has sparked fears about the implications of autonomous weapons in warfare and the role of Indian manufacturing in the conflict in Gaza. Human rights activists and defense analysts are particularly worried about the potential for increased civilian casualties and the further escalation of the conflict.”

On a minor but poetic note, a disclaimer states the post was written with ChatGPT. Strap in, fellow humans. We are just at the beginning of a long and peculiar ride. How are those assorted government committees doing with their AI policy planning?

Cynthia Murrell, March 7, 2024

The RCMP: Monitoring Sparks Criticism

March 5, 2024

This essay is the work of a dumb dinobaby. No smart software required.

The United States and United Kingdom get bad raps for monitoring their citizens’ Internet usage. Thankfully it is not as bad as in China, Russia, and North Korea. The United States’ “hat” is hardly criticized for anything, but even Canada has its foibles. Canada’s Royal Canadian Mounted Police (RCMP) is in water hot enough to melt all its snow, says the Madras Tribune: “RCMP Slammed For Private Surveillance Used To Trawl Social Media, ‘Darknet’.”

It has been known that the RCMP has used private surveillance tools to monitor public-facing information and other social media since 2015. The Privacy Commissioner of Canada (OPC) revealed that, when the RCMP was collecting information, the police force failed to comply with privacy laws. The RCMP does not agree with the OPC’s suggestions to make its monitoring activities with third-party vendors more transparent. The RCMP also argued that, because it was using third-party vendors, it was not required to ensure that information was collected according to Canadian law.

The Mounties’ non-compliance began in 2014 after three police officers were shot. An information-monitoring initiative called Project Wide Awake started, and it involved the software Babel X from Babel Street, a US threat intelligence company. Babel X allowed the RCMP to search social media accounts, including private ones, and information from third-party data brokers.

Despite the backlash, the RCMP will continue to use Babel X:

“ ‘Despite the gaps in (the RCMP’s) assessment of compliance with Canadian privacy legislation that our report identifies, the RCMP asserted that it has done enough to review Babel X and will therefore continue to use it,’ the report noted. ‘In our view, the fact that the RCMP chose a subcontracting model to pay for access to services from a range of vendors does not abrogate its responsibility with respect to the services that it receives from each vendor.’”

Canada might be the politest country in North America, but its government is as dedicated to law enforcement surveillance as the US.

Whitney Grace, March 5, 2024

Techno Bashing from Thumb Typers. Give It a Rest, Please

March 5, 2024

This essay is the work of a dumb dinobaby. No smart software required.

Every generation says that the latest cultural and technological advancements make people stupider. Novels were trash, the horseless carriage ruined traveling, radio encouraged wanton behavior, and the list continues. Everything changed with the implementation of television, aka the boob tube. Too much television does cause cognitive degradation. In layman’s terms, the brain shifts into passive functioning rather than active thinking. It would almost be a Zen moment. Addiction is fun for some.

The introduction of videogames, computers, and mobile devices accelerated the decline of brain function. The combination of AI chatbots and screens, however, might prove to be the ultimate dumbing down of humans. APA PsycNet posted a new study by Umberto León-Domínguez titled “Potential Cognitive Risks Of Generative Transformer-Based AI-Chatbots On Higher Order Executive Thinking.”

Psychologists already discovered that spending too much time on a screen (i.e. playing videogames, watching TV or YouTube, browsing social media, etc.) increases the risk of depression and anxiety. When that is paired with AI-chatbots, or programs designed to replicate the human mind, humans rely on the algorithms to think for them.

León-Domínguez wondered if too much AI-chatbot consumption impairs cognitive development. In his abstract, he introduces some handy new terms:

“The ‘neuronal recycling hypothesis’ posits that the brain undergoes structural transformation by incorporating new cultural tools into ‘neural niches,’ consequently altering individual cognition. In the case of technological tools, it has been established that they reduce the cognitive demand needed to solve tasks through a process called ‘cognitive offloading.’”

“Cognitive offloading” perfectly describes younger generations and screen addicts. “Cultural tools into neural niches” also reflects how older crowds view new-fangled technology, coupled with how different parts of the brain are affected by technology advancements. The modern human brain works differently from a human brain in the 18th century or two thousand years ago.

He found:

“The pervasive use of AI chatbots may impair the efficiency of higher cognitive functions, such as problem-solving. Importance: Anticipating AI chatbots’ impact on human cognition enables the development of interventions to counteract potential negative effects. Next Steps: Design and execute experimental studies investigating the positive and negative effects of AI chatbots on human cognition.”

Are we doomed? No. Do we need to find ways to counteract stupidity? Yes. Do we know how it will be done? No.

Isn’t tech fun?

Whitney Grace, March 5, 2024

A Fresh Outcry Against Deepfakes

February 29, 2024

This essay is the work of a dumb humanoid. No smart software required.

There is no surer way in 2024 to incur public wrath than to wrong Taylor Swift. The Daily Mail reports, “’Deepfakes Are a Huge Threat to Society’: More than 400 Experts and Celebrities Sign Open Letter Demanding Tougher Laws Against AI-Generated Videos—Weeks After Taylor Swift Became a Victim.” Will the super duper mega star’s clout spur overdue action? Deepfake porn has been an under-acknowledged problem for years. Naturally, the generative AI boom has made it much worse: Between 2022 and 2023, we learn, such content has increased by more than 400 percent. Of course, the vast majority of victims are women and girls. Celebrities are especially popular targets. Reporter Nikki Main writes:

“‘Deepfakes are a huge threat to human society and are already causing growing harm to individuals, communities, and the functioning of democracy,’ said Andrew Critch, AI Researcher at UC Berkeley in the Department of Electrical Engineering and Computer Science and lead author on the letter. … The letter, titled ‘Disrupting the Deepfake Supply Chain,’ calls for a blanket ban on deepfake technology, and demands that lawmakers fully criminalize deepfake child pornography and establish criminal penalties for anyone who knowingly creates or shares such content. Signees also demanded that software developers and distributors be held accountable for anyone using their audio and visual products to create deepfakes.”

Penalties alone will not curb the problem. Critch and company make this suggestion:

“The letter encouraged media companies, software developers, and device manufacturers to work together to create authentication methods like adding tamper-proof digital seals and cryptographic signature techniques to verify the content is real.”
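A minimal version of the “cryptographic signature” idea in the quote above can be sketched with an HMAC from the Python standard library. Real content-provenance schemes use public-key signatures and embedded metadata rather than a shared secret; the key, function names, and sample bytes here are invented for illustration.

```python
import hmac
import hashlib

SECRET_KEY = b"publisher-signing-key"  # hypothetical key held by the publisher

def seal(content: bytes) -> str:
    """Compute a tamper-evident seal over a piece of media."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Check that the content still matches its seal, in constant time."""
    return hmac.compare_digest(seal(content), tag)

video = b"original-frames"
tag = seal(video)
print(verify(video, tag))                 # True: untouched content
print(verify(b"deepfaked-frames", tag))   # False: the seal no longer matches
```

Any alteration to the bytes, including a deepfake swap, invalidates the seal; the hard part, as the letter implies, is getting capture devices, software, and platforms to generate and check such seals consistently.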

Sadly, media and tech companies are unlikely to invest in such measures unless adequate penalties are put in place. Will legislators finally act? Answer: They are having meetings. That’s an act.

Cynthia Murrell, February 29, 2024
