Worried about TikTok? Do Not Overlook CapCut

March 18, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

I find the excitement about TikTok interesting. The US wants to play the reciprocity card; that is, China disallows US apps so the US can ban TikTok. How influential is TikTok? US elected officials learned first hand that TikTok users can get messages through to what is often a quite unresponsive cluster of elected officials. But let’s leave TikTok aside.

image

Thanks, MSFT Copilot. Good enough.

What do you know about the ByteDance cloud software CapCut? Ah, you have never heard of it. That’s not surprising because it is aimed at those who make videos for TikTok (big surprise) and other video platforms like YouTube.

CapCut has been gaining supporters like the happy-go-lucky people who published “how to” videos about CapCut on YouTube. On TikTok, CapCut short form videos have tallied billions of views. What makes it interesting to me is that it wants to phone home, store content in the “cloud”, and provide high-end tools to handle some tricky video situations like weird backgrounds on AI generated videos.

The product CapCut was named (I believe) JianYing or Viamaker (the story varies by source) which means nothing to me. The Google suggests its meanings could range from hard to paper cut out. I am not sure I buy these suggestions because Chinese is a linguistic slippery fish. Is that a question or a horse? In 2020, the app got a bit of shove into the world outside of the estimable Middle Kingdom.

Why is this important to me? Here are my reasons for creating this short post:

  • Based on my tests of the app, it has some of the same data hoovering functions of TikTok
  • The data of images and information about the users provides another source of potentially high value information to those with access to the information
  • Data from “casual” videos might be quite useful when the person making the video has landed a job in a US national laboratory or in one the high-tech playgrounds in Silicon Valley. Am I suggesting blackmail? Of course not, but a release of certain imagery might be an interesting test of the videographer’s self-esteem.

If you want to know more about CapCut, try these links:

  • Download (ideally to a burner phone or a PC specifically set up to test interesting software) at www.capcut.com
  • Read about the company CapCut in this 2023 Recorded Future write up
  • Learn about CapCut’s privacy issues in this Bloomberg story.

Net net: Clever stuff but who is paying attention. Parents? Regulators? Chinese intelligence operatives?

Stephen E Arnold, March 18, 2024

The NSO Group Back in the News: Is That a Good Thing?

January 24, 2024

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Some outfits struggle to get PR, not the NSO Group. The situation is no “dream.” I spotted this write up in 9 to 5 Mac: “Apple Wins Early Battle against NSO after Suing Spyware Mercenaries for Attacking iPhone Users.” For me, the main point of the article is:

Judge Donato ruled that NSO Group’s request for dismissal in the US in favor of a trial in Israel didn’t meet the bar. Instead, Judge Donato suggested that Apple would face the same challenges in Israel that NSO faces in the US.

image

A senior manager who is an attorney skilled in government processes looks at the desk in his new office. Wow, that looks untidy. Thanks, MSFT Copilot Bing thing. How’s that email security issue coming along? Ah, good enough, you say?

I think this means that the legal spat will be fought in the US of A. Here’s the sentence quoted by 9 to 5 Mac which allegedly appeared in a court document:

NSO has not demonstrated otherwise. NSO also overlooks the fact that the challenges will be amenable to a number of mitigating practices.

The write up includes this passage:

An Apple spokesperson tells 9to5Mac that the company will continue to protect users against 21st century mercenaries like the NSO Group. Litigation against the Pegasus spyware maker is part of a larger effort to protect users…

From my point of view, the techno feudal outfit has surfed on the PR magnetism of the NSO Group. Furthermore, the management team at NSO Group faces what seems to be a bit of a legal hassle. Some may believe that the often ineffective Israeli cyber security technology which failed to signal, thwart, or disrupt the October 2023 dust up requires more intense scrutiny. NSO Group, therefore, is in the spotlight.

More interesting from my vantage point is the question, “How can NSO Group’s lawyering-savvy senior management not demonstrate its case in such a way to, in effect, kill some of the PR magnetism. Take it from me. This is not a “dream” assignment for NSO Group’s legal eagles. I would also be remiss if I did not mention that Apple has quite a bit of spare cash with which to feather the nest of legal eagles. Apple wants to be perceived as the user’s privacy advocate and BFF. When it comes to spending money and rounding up those who love their Apple devices, the estimable Cupertino outfit may be a bit of a challenge, even to attorneys with NSA and DHS experience.

As someone said about publicity, any publicity is good publicity. I am not sure the categorical affirmative is shared by everyone involved with NSO Group. And where is Hulio? He’s down by the school yard. He doesn’t know where he’s going, but Hulio is going the other way. (A tip of the hat to Paul Simon and his 1972 hit.)

Stephen E Arnold, January 24, 2024

23andMe: Those Users and Their Passwords!

December 5, 2023

green-dino_thumb_thumb_thumbThis essay is the work of a dumb dinobaby. No smart software required.

Silicon Valley and health are match fabricated in heaven. Not long ago, I learned about the estimable management of Theranos. Now I find out that “23andMe confirms hackers stole ancestry data on 6.9 million users.” If one follows the logic of some Silicon Valley outfits, the data loss is the fault of the users.

image

“We have the capability to provide the health data and bioinformation from our secure facility. We have designed our approach to emulate the protocols implemented by Jack Benny and his vault in his home in Beverly Hills,” says the enthusiastic marketing professional from a Silicon Valley success story. Thanks, MSFT Copilot. Not exactly Jack Benny, Ed, and the foghorn, but I have learned to live with “good enough.”

According to the peripatetic Lorenzo Franceschi-Bicchierai:

In disclosing the incident in October, 23andMe said the data breach was caused by customers reusing passwords, which allowed hackers to brute-force the victims’ accounts by using publicly known passwords released in other companies’ data breaches.

Users!

What’s more interesting is that 23andMe provided estimates of the number of customers (users) whose data somehow magically flowed from the firm into the hands of bad actors. In fact, the numbers, when added up, totaled almost seven million users, not the original estimate of 14,000 23andMe customers.

I find the leak estimate inflation interesting for three reasons:

  1. Smart people in Silicon Valley appear to struggle with simple concepts like adding and subtracting numbers. This gap in one’s education becomes notable when the discrepancy is off by millions. I think “close enough for horse shoes” is a concept which is wearing out my patience. The difference between 14,000 and almost 17 million is not horse shoe scoring.
  2. The concept of “security” continues to suffer some set backs. “Security,” one may ask?
  3. The intentional dribbling of information reflects another facet of what I call high school science club management methods. The logic in the case of 23andMe in my opinion is, “Maybe no one will notice?”

Net net: Time for some regulation, perhaps? Oh, right, it’s the users’ responsibility.

Stephen E Arnold, December 5, 2023 

The Risks of Smart Software in the Hands of Fullz Actors and Worse

November 7, 2023

green-dino_thumb_thumbThis essay is the work of a dumb humanoid. No smart software required.

The ChatGPT and Sam AI-Man parade is getting more acts. I spotted some thumbs up from Satya Nadella about Sam AI-Man and his technology. The news service Techmeme provided me with dozens of links and enticing headlines about enterprise this and turbo that GPT. Those trumpets and tubas were pumping out the digital version of Funiculì, Funiculà.

I want to highlight one write up and point out an issue with smart software that appears to have been ignored, overlooked, or like the iceberg possibly that sank the RMS Titanic, was a heck of a lot more dangerous than Captain Edward Smith appreciated.

11 7 parade

The crowd is thrilled with the new capabilities of smart software. Imagine automating mundane, mindless work. Over the oom-pah of the band, one can sense the excitement of the Next Big Thing getting Bigger and more Thingier. In the crowd, however, are real or nascent bad actors. They are really happy too. Imagine how easy it will be to automate processes designed to steal personal financial data or other chinks in humans’ armor!

The article is “How OpenAI Is Building a Path Toward AI Agents.” The main idea is that one can type instructions into Sam AI-Man’s GPT “system” and have smart software hook together discrete functions. These functions can then deliver an output requiring the actions of different services.

The write up approaches this announcement or marketing assertion with some prudence. The essay points out that “customer chatbots aren’t a new idea.” I agree. Connecting services has been one of the basic ideas of the use of software. Anyone who has used notched cards to retrieve items related to one another is going to understand the value of automation. And now, if the Sam AI-Man announcements are accurate that capability no longer requires old-fashioned learning the ropes.

The cited write up about building a path asserts:

Once you start enabling agents like the ones OpenAI pointed toward today, you start building the path toward sophisticated algorithms manipulating the stock market; highly personalized and effective phishing attacks; discrimination and privacy violations based on automations connected to facial recognition; and all the unintended (and currently unimaginable) consequences of infinite AIs colliding on the internet.

Fear, uncertainty, and doubt are staples of advanced technology. And the essay makes clear that the rule maker in chief is Sam AI-Man; to wit the essay says:

After the event, I asked Altman how he was thinking about agents in general. Which actions is OpenAI comfortable letting GPT-4 take on the internet today, and which does the company not want to touch? Altman’s answer is that, at least for now, the company wants to keep it simple. Clear, direct actions are OK; anything that involves high-level planning isn’t.

Let me introduce my observations about the Sam AI-Man innovations and the type of explanations about the PR and marketing event which has whipped up pundits, poohbahs, and Twitter experts (perhaps I should say X-spurts?)

First, the Sam AI-Man announcements strike me as making orchestration a service easy to use and widely available. Bad things won’t be allowed. But the core idea of what I call “orchestration” is where the parade is marching. I hear the refrain “Some think the world is made for fun and frolic.” But I don’t agree, I don’t agree. Because as advanced tools become widely available, the early adopters are not exclusively those who want to link a calendar to an email to a document about a meeting to talk about a new marketing initiative.

Second, the ability of Sam AI-Man to determine what’s in bounds and out of bounds is different from refereeing a pickleball game. Some of the players will be nation states with an adversarial view of the US of A. Furthermore, there are bad actors who have a knack for linking automated information to online extortion. These folks will be interested in cost cutting and efficiency. More problematic, some of these individuals will be more active in testing how orchestration can facilitate their human trafficking activities or drug sales.

Third, government entities and people like Sam AI-Man are, by definition, now in reactive mode. What I mean is that with the announcement and the chatter about automating the work required to create a snappy online article is not what a bad actor will do. Individuals will see opportunities to create new ways to exploit the cluelessness of employees, senior citizens, and young people. The cheerful announcements and the parade tunes cannot drown out the low frequency rumbles of excitement now rippling through the bad actor grapevines.

Net net: Crime propelled by orchestration is now officially a thing. The “regulations” of smart software, like the professionals who will have to deal with the downstream consequences of automation, are out of date. Am I worried? For me personally, no, I am not worried. For those who have to enforce the laws which govern a social construct? Yep, I have a bit of concern. Certainly more than those who are laughing and enjoying the parade.

Stephen E Arnold, November 7, 2023

Microsoft and What Fizzled with One Trivial Omission. Yep, Inconsequential

October 27, 2023

green-dino_thumbThis essay is the work of a dumb humanoid. No smart software required.

I read “10 Hyped-Up Windows Features That Fizzled Out” is an interesting list. I noticed that the Windows Phone did not make the cut. How important is the mobile phone to online computing and most people’s life? Gee, a mobile phone? What’s that? Let’s see Apple has a phone and it produces some magnetism for the company’s other products and services. And Google has a phone with its super original, hardly weird Android operating system with the pull through for advertising sales. Google does fancy advertising, don’t you think? Then we have the Huawei outfit, which despite political headwinds, keeps tacking and making progress and some money. But Microsoft? Nope, no phone despite the superior thinking which brought Nokia into the Land of Excitement.

10 27 security fire

What do you mean security is a priority? I was working on 3D, the metaverse, and mixed reality. I don’t think anyone on my team knows anything about security. Is someone going to put out that fire? I have to head to an off site meeting. Catch you later,” says the hard working software professional. Thanks MidJourney, you understand dumpster fire, don’t you?

What’s on the list? Here are five items that the online write up identified as “fizzled out” products. Please, navigate to the original “let’s make a list and have lunch delivered” article.

The five items I noted are:

  1. The dual screen revolution Windows 10X for devices like the “Surface Neo.” Who knew?
  2. 3D modeling. Okay, I would have been happy if Microsoft could support plain old printing from its outstanding Windows products.
  3. Mixed reality. Not even the Department of Defense was happy with weird goggles which could make those in the field of battle a target.
  4. Set tabs. Great idea. Now you can buy it from Stardock, the outfit that makes software to kill the weird Window interface. Yep, we use this on our Windows computers. Why? The new interface is a pain, not a “pane.”
  5. My People. I don’t have people. I have a mobile phone and email. Good enough.

What else is missing from this lunch time-brainstorming list generation session?

My nomination is security. The good enough approach is continuing to demonstrate that — bear with me for this statement — good enough is no longer good enough in my opinion.

Stephen E Arnold, October 27, 2023

Newly Emerged Snowden Revelations Appear in Dutch Doctoral Thesis

October 10, 2023

Vea4_thumb_thumb_thumb_thumb_thumb_t[2]Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

One Eddie Snowden (a fine gent indeed) rumor said that 99 percent of the NSA data Edward Snowden risked his neck to expose ten years ago remains unpublished. Some entities that once possessed that archive are on record as having destroyed it. This includes The Intercept, which was originally created specifically to publish its revelations. So where are the elusive Snowden files now? Could they be In the hands of a post-PhD researcher residing in Berlin? Computer Weekly examines three fresh Snowden details that made their way into a doctoral thesis in its article, “New Revelations from the Snowden Archive Surface.” The thesis was written by American citizen Jacob Applebaum, who has since received his PhD from the Eindhoven University of Technology in the Netherlands. Reporter Stefania Maurizi summarizes:

“These revelations go back a decade, but remain of indisputable public interest:

  1. The NSA listed Cavium, an American semiconductor company marketing Central Processing Units (CPUs) – the main processor in a computer which runs the operating system and applications – as a successful example of a ‘SIGINT-enabled’ CPU supplier. Cavium, now owned by Marvell, said it does not implement back doors for any government.
  2. The NSA compromised lawful Russian interception infrastructure, SORM. The NSA archive contains slides showing two Russian officers wearing jackets with a slogan written in Cyrillic: ‘You talk, we listen.’ The NSA and/or GCHQ has also compromised Key European LI [lawful interception] systems.
  3. Among example targets of its mass surveillance program, PRISM, the NSA listed the Tibetan government in exile.”

Of public interest, indeed. See the write-up for more details on each point or, if you enjoy wading through academic papers, the thesis itself [pdf]. So how and when did Applebaum get his hands on information from the Snowden docs? Those details are not revealed, but we do know this much:

“In 2013, Jacob Appelbaum published a remarkable scoop for Der Spiegel, revealing the NSA had spied on Angela Merkel’s mobile phone. This scoop won him the highest journalistic award in Germany, the Nannen Prize (later known as the Stern Award). Nevertheless, his work on the NSA revelations, and his advocacy for Julian Assange and WikiLeaks, as well as other high-profile whistleblowers, has put him in a precarious condition. As a result of this, he has resettled in Berlin, where he has spent the past decade.”

Probably wise. Will most of the Snowden archive remain forever unpublished? Impossible to say, especially since we do not know how many copies remain and in whose hands.

Cynthia Murrell, October 10, 2023

Microsoft Wants to Help Improve Security: What about Its Engineering of Security

August 24, 2023

Vea4_thumb_thumb_thumb_thumb_thumb_tNote: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Microsoft is a an Onion subject when it comes to security. Black hat hackers easily crack any new PC code as soon as it is released. Generative AI adds a new slew of challenges for bad actors but Microsoft has taken preventative measures to protect their new generative AI tools. Wired details how Microsoft has invested in AI security for years, “Microsoft’s AI Red Team Has Already Made The Case For Itself.”

While generative AI aka chatbots aka AI assistants are new for consumers, tech professionals have been developing them for years. While the professionals have experimented with the best ways to use the technology, they have also tested the best way to secure AI.

Microsoft shared that since 2018 it has had a team learning how to attack its AI platforms to discover weaknesses. Known as Microsoft’s AI red team, the group consists of an interdisciplinary team of social engineers, cybersecurity engineers, and machine learning experts. The red team shares its findings with its parent company and the tech industry. Microsoft wants the information known across the tech industry. The team learned that AI security has conceptual differences from typical digital defense so AI security experts need to alter their approach to their work.

“ ‘When we started, the question was, ‘What are you fundamentally going to do that’s different? Why do we need an AI red team?’ says Ram Shankar Siva Kumar, the founder of Microsoft’s AI red team. ‘But if you look at AI red teaming as only traditional red teaming, and if you take only the security mindset, that may not be sufficient. We now have to recognize the responsible AI aspect, which is accountability of AI system failures—so generating offensive content, generating ungrounded content. That is the holy grail of AI red teaming. Not just looking at failures of security but also responsible AI failures.’”

Kumar said it took time to make the distinction and that red team with have a dual mission. The red team’s early work focused on designing traditional security tools. As time passed, the AI read team expanded its work to incorporate machine learning flaws and failures.

The AI red team also concentrates on anticipating where attacks could emerge and developing solutions to counter them. Kumar explains that while the AI red team is part of Microsoft, they work to defend the entire industry.

Whitney Grace, August 24, 2023

Intellectual Property: What Does That Mean, Samsung?

June 19, 2023

Vea4_thumb_thumb_thumb_thumb_thumb_t[1]_thumb_thumb_thumbNote: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I read “Former Samsung Executive Accused of Trying to Copy an Entire Chip Plant in China.” I have no idea if [a] the story is straight and true, [b] a disinformation post aimed at China, [c] something a “real news” type just concocted with the help of a hallucinating chunk of smart software, [d] a story emerging from a lunch meeting with “what if” ideas and “hypotheticals” were flitting from Chinese take out container to take out container.

It does not matter. I find it bold, audacious, and almost believable.

6 12 stealing documents

A single engineer’s pile of schematics, process flow diagrams, and details of third party hardware require to build a Samsung-like outfit. The illustration comes from the fertile zeros and ones at MidJourney.

The write up reports:

Prosecutors in the Suwon District have indicted a former Samsung executive for allegedly stealing semiconductor plant blueprints and technology from the leading chipmaker, BusinessKorea reports. They didn’t name the 65-year-old defendant, who also previously served as vice president of another Korean chipmaker SK Hynix, but claimed he stole the information between 2018 and 2019. The leak reportedly cost Samsung about $230 million.

Why would someone steal information to duplicate a facility which is probably getting long in the tooth? That’s a good question. Why not steal from the departments of several companies which are planning facilities to be constructed in 2025? The write up states:

The defendant allegedly planned to build a semiconductor in Xi’an, China, less than a mile from an existing Samsung plant. He hired 200 employees from SK Hynix and Samsung to obtain their trade secrets while also teaming up with an unnamed Taiwanese electronics manufacturing company that pledged $6.2 billion to build the new semiconductor plant — the partnership fell through. However, the defendant was able to secure about $358 million from Chinese investors, which he used to create prototypes in a Chengdu, China-based plant. The plant was reportedly also built using stolen Samsung information, according to prosecutors.

Three countries identified. The alleged plant would be located in easy-to-reach Xi’an. (Take a look at the nifty entrance to the walled city. Does that look like a trap to you? It did to me.)

My hunch is that there is more to this story. But it does a great job of casting shade on the Middle Kingdom. Does anyone doubt the risk posed by insiders who get frisky? I want to ask Samsung’s human resources professional about that vetting process for new hires and what happens when a dinobaby leaves the company with some wrinkles, gray hair, and information. My hunch is that the answer will be, “Not much.”

Stephen E Arnold, June 19, 2023

Is This for Interns, Contractors, and Others Whom You Trust?

June 14, 2023

Vea4_thumb_thumb_thumb_thumb_thumb_t[1]_thumb_thumbNote: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Not too far from where my office is located, an esteemed health care institution is in its second month of a slight glitch. The word in Harrod’s Creek is that security methods at use at a major hospital were — how shall I frame this — a bit like the 2022-2023 University of Kentucky’s’ basketball team’s defense. In Harrod’s Creek lingo, this statement would translate to standard English as “them ‘Cats did truly suck.”

6 12 temp worker

A young temporary worker looks at her boss. She says, “Yes, I plugged a USB drive into this computer because I need to move your PowerPoint to a different machine to complete the presentation.” The boss says, “Okay, you can use the desktop in my office. I have to go to a cyber security meeting. See you after lunch. Text me if you need a password to something.” The illustration for this hypothetical conversation emerged from the fountain of innovation known as MidJourney.

The chatter about assorted Federal agencies’ cyber personnel meeting with the institution’s own cyber experts are flitting around. When multiple Federal entities park their unobtrusive and sometimes large black SUVs close to the main entrance, someone is likely to notice.

This short blog post, however, is not about the lame duck cyber security at the health care facility. (I would add an anecdote about an experience I had in 2022. I showed up for a check up at a unit of the health care facility. Upon arriving, I pronounced my date of birth and my name. The professional on duty said, “We have an appointment for your wife and we have her medical records.” Well, that was a trivial administrative error: Wrong patient, confidential information shipped to another facility, and zero idea how that could happen. I made the appointment myself and provided the required information. That’s a great computer systems and super duper security in my book.)

The question at hand, however, is: “How can a profitable, marketing oriented, big time in their mind health care outfit, suffer a catastrophic security breach?”

I shall point you to one possible pathway: Temporary workers, interns, and contractors. I will not mention other types of insiders.

Please, point your browser to Hak5.org and read about the USB Rubber Ducky. With a starting price of $80US, this USB stick has some functions which can accomplish some interesting actions. The marketing collateral explains:

Computers trust humans. Humans use keyboards. Hence the universal spec — HID, or Human Interface Device. A keyboard presents itself as a HID, and in turn it’s inherently trusted as human by the computer. The USB Rubber Ducky — which looks like an innocent flash drive to humans — abuses this trust to deliver powerful payloads, injecting keystrokes at superhuman speeds.

With the USB Rubby Ducky, one can:

  • Install backdoors
  • Covertly exfiltrate documents
  • Capture credential
  • Execute compound actions.

Plus, if there is a USB port, the Rubber Ducky will work.

I mention this device because it may not too difficult for a bad actor to find ways into certain types of super duper cyber secure networks. Plus temporary workers and even interns welcome a coffee in an organization’s cafeteria or a nearby coffee shop. Kick in a donut and a smile and someone may plug the drive in for free!

Stephen E Arnold, June 14, 2023

Google: Responsible and Trustworthy Chrome Extensions with a Dab of Respect the User

June 7, 2023

More Malicious Extensions in Chrome Web Store” documents some Chrome extensions (add ins) which allegedly compromise a user’s computer. Google has been using words like responsible and trust with increasing frequency. With Chrome in use by more than half of those with computing devices, what’s the dividing line between trust and responsibility for Google smart software and stupid but market leading software like Chrome. If a non-Google third party can spot allegedly problematic extensions, why can’t Google? Is part of the answer, “Talk is cheap. Fixing software is expensive”? That’s a good question.

The cited article states:

… we are at 18 malicious extensions with a combined user count of 55 million. The most popular of these extensions are Autoskip for Youtube, Crystal Ad block and Brisk VPN: nine, six and five million users respectively.

The write up crawfishes, stating:

Mind you: just because these extensions monetized by redirecting search pages two years ago, it doesn’t mean that they still limit themselves to it now. There are way more dangerous things one can do with the power to inject arbitrary JavaScript code into each and every website.

My reaction is that why are these allegedly malicious components in the Google “store” in the first place?

I think the answer is obvious: Talk is cheap. Fixing software is expensive. You may disagree, but I hold fast to my opinion.

Stephen E Arnold, June 7, 2023

Next Page »

  • Archives

  • Recent Posts

  • Meta