A Recipe for Thinly Sliced Technology: Is the Pizza Still a Pizza?
July 30, 2020
I don’t want to wax philosophical. Amazon is associated with the concept of the two-pizza team: keep a team small enough that two pizzas can feed it. Amazon is a two-pizza company, and the approach works well enough to make Mr. Bezos a star among stars when it comes to cash and to the risk of government regulation.
The write up “Many Small Teams” seems to be about an idea and a general business practice.
I noted this passage in the article:
Somewhere along the line, we forgot about “reducing communication” and just started fixating on assigning independent teams to problem statements that were essentially tiny slices of business problems. As the problem space gets more finely sliced in hopes of achieving scale at each step, so does the number of teams. e.g. What might have been a “Data Delivery Team” charged with delivering fresh data to customers unfortunately becomes “Data ingestion Team”, “Data processing Team” and “Data Release Team” (real world example).
Is the author describing an Amazon vulnerability? When a pizza is sliced into thin pieces, is it still a pizza? What if the approach creates a dog’s breakfast?
Interesting question. The essay points out that an organic process takes place: small teams grow. Then teams split. What’s the chief indicator of this condition? Perhaps documentation like Amazon’s explanations of its myriad cloud services. Little slices of pizza with more slicing taking place?
The author of the article works at Uber. Pizza delivery I get. But tiny pizza slices for a gig economy delivery outfit? Like I said, philosophy.
Stephen E Arnold, July 31, 2020
2020: Reactive, Semi-Proactive, and Missing the Next Big Thing
July 27, 2020
I wanted to wrap up my July 28, 2020, DarkCyber this morning. Producing my one-hour, pre-recorded lecture for the US National Cyber Crime Conference sucked up my time.
But I scanned two quite different write ups AFTER I read “Public Asked To Report Receipt of any Unsolicited Packages of Seeds.” Call me suspicious, but I noted this passage in the news release from the Virginia Department of Agriculture and Consumer Services:
The Virginia Department of Agriculture and Consumer Services (VDACS) has been notified that several Virginia residents have received unsolicited packages containing seeds that appear to have originated from China. The types of seeds in the packages are unknown at this time and may be invasive plant species. The packages were sent by mail and may have Chinese writing on them. Please do not plant these seeds.
And why, pray tell? What’s the big deal with seeds possibly from China, America’s favorite place to sell soybeans? Here’s the key passage:
Invasive species wreak havoc on the environment, displace or destroy native plants and insects and severely damage crops. Taking steps to prevent their introduction is the most effective method of reducing both the risk of invasive species infestations and the cost to control and mitigate those infestations.
Call me suspicious, but the US is struggling with the Rona, or what I call WuFlu, is it not? Now seeds. My mind suggested that perhaps, just perhaps, the soybean buyers from parts unknown are testing another bio-vector.
As the other 49 states realize that they too may want to put some “real” scientists to work examining the freebie seeds, I noted two other articles.
I am less concerned with the intricate arguments, the charts, and the factoids and more with how I view each write up in the context of serious thinking about some individuals’ ability to perceive risk.
The first write up is by a former Andreessen Horowitz partner, Benedict Evans. The title of the essay is “Regulating Technology.” The article explains that technology is now a big deal, particularly online technology. The starting point is 1994, which is about 20 years after the early RECON initiatives. The key point is that regulators have had plenty of time to come to grips with unregulated digital information flows. (I want to point out that those in Mr. Evans’ circle tossed accelerants into the cyberfires, which were containable decades ago.) My point is that the current analysis makes what is happening seem so logical, just a half century too late.
The second write up is about TikTok, the China-centric app banned in India and accused of the phone-home tricks popular among the Huawei and Xiaomi crowd. The point of “TikTok, the Facebook competitor?” seems to be that TikTok has bought its way into the American market. The same big tech companies that continue to befuddle analysts and regulators took TikTok’s cash and said, “Come on down.” The TikTok prize may be a stream of free flowing data particularized to tasty demographics. My point is that this is a real time, happening event. There’s nothing like a “certain blindness” to ensure a supercharged online service will smash through data collection barriers.
News flash. The online vulnerabilities (lack of regulation, thumb typing clueless users, and lack of meaningful regulatory action) are the old threat vector.
The new threat vector? Seeds. Bio-attacks. Bio-probes. Bio-ignorance. Big, fancy thoughts are great. Charts are wonderful. Reformed Facebookers’ observations are interesting. But the now problem is the bio thing.
Just missing what is in front of their faces maybe? Rona masks and seed packets. Probes or attacks? The moral may be a certain foreign power’s willingness to learn the lessons of action-oriented people like Generals Curtis LeMay or George Patton. Add some soy sauce and stir in a cup of Sun Tzu. Yummy. Cheap. Maybe brutally effective?
So, pundits and predictive analytics experts, analyze away, but look for the muted glow of a threat vector beyond the screen of one’s mobile phone.
Stephen E Arnold, July 27, 2020
Disney and Face-Swapping: Real Actors’ Days Numbered
July 27, 2020
It took big bucks and a lot of time to insert virtual models of Carrie Fisher and Peter Cushing into 2016’s Rogue One: A Star Wars Story, but Disney may soon pull off similar tricks much more easily. The Verge tells us, “Disney’s Deepfakes Are Getting Closer to a Big-Screen Debut.” The studio presented the technology at the recent Eurographics Symposium on Rendering 2020 in London. The article shares Disney’s video illustrating the studio’s latest developments and contrasting them with earlier face-swapping technologies. It is well worth the investment of four minutes for anyone who is at all curious. Reporter James Vincent writes:
“The deepfakes you’ve probably seen to date may look impressive on your phone, but their flaws would be much more apparent on a larger screen. As an example, Disney’s researchers note that the maximum-resolution videos they could create from popular open-source deepfake model DeepFakeLab were just 256 x 256 pixels in size. By comparison, their model can produce video with a 1024 x 1024 resolution — a sizable increase. Apart from this, the functionality of Disney’s deepfake model is fairly conventional: it’s able to swap the appearances of two individuals while maintaining the target’s facial expressions. If you watch the video, though, note how technically constrained the output seems to be. It only produces deepfakes of well-lit individuals looking more or less straight at the camera. Challenging angles and lighting are still not on the agenda for this tech. As the researchers note, though, we are getting closer to creating deepfakes good enough for commercial projects.”
Left unmentioned are such projects’ darker possibilities—false video evidence of a crime, for example, or faked fodder for political scandal. Still, it is fascinating to watch this technology evolve. It is not surprising Disney’s financial motivations have gotten them this far.
Whitney Grace, July 27, 2020
Honeywell: Yep, Our Sweet Quantum Computer Is the Blue Ribbon Winner
July 25, 2020
Who has the world’s fastest quantum computer? Is it IBM, Microsoft, Apple, or Google? No, none of these companies can make that claim. According to The Motley Fool’s “Honeywell Unveils The World’s Fastest Quantum Computer,” the honor belongs to Honeywell. Quantum computers are still reserved for companies, universities, and governments with deep pockets, but Honeywell’s newest machine moves them one step closer to commercial use.
IBM used to own the fastest quantum computer, but Honeywell’s device achieves a quantum volume of 64; IBM’s machine manages only 32. The Honeywell quantum computer runs six qubits. A qubit is a quantum computing unit that can store and process more than ones and zeros; most computers are still limited to the famous ones and zeros of binary code. Honeywell’s computer also boasts a 99.997% fidelity score, meaning its simulations and calculations are of high quality.
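How does a quantum volume of 64 relate to six qubits? Here is a back-of-the-envelope check, assuming (my assumption, not The Motley Fool’s explanation) the commonly used definition that quantum volume is two raised to the size of the largest “square” circuit a machine can run reliably:

```python
import math

# Assumed definition: quantum_volume = 2 ** n, where n is the width and
# depth of the largest "square" circuit the machine runs reliably.
print(math.log2(64))  # Honeywell: 6.0, which lines up with its six qubits
print(math.log2(32))  # IBM: 5.0
```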
Quantum computers are still at a stage similar to that of the behemoths that dominated basements last century. Ironically, quantum computers are large themselves:
“The Honeywell system is another step forward in a long and difficult process. Scientists expect quantum computers to handle problems that are essentially unsolvable with current technology in fields such as cryptography, weather forecasting, artificial intelligence, and drug development. However, that future lies many years ahead. These are very early days in the development of usable quantum systems.”
Honeywell does not claim to have the best quantum computer, only the fastest. At doing what exactly?
Whitney Grace, July 25, 2020
Intel: Distracted by Horse Ridge, Engineers Take Another Detour
July 24, 2020
My hunch is that you did not read “Intel Introduces Horse Ridge to Enable Commercially Viable Quantum Computers.” You probably don’t care about some of the hurdles quantum computers face with or without Horse Ridge; for example, cooling, programming, and stability. That’s okay. The magic of the “quantum” horse thing was news only a sparse pasture below the ridge can appreciate. I have in my files one snippet from the PR output, however:
Horse Ridge is a highly integrated, mixed-signal SoC that brings the qubit controls into the quantum refrigerator — as close as possible to the qubits themselves. It effectively reduces the complexity of quantum control engineering from hundreds of cables running into and out of a refrigerator to a single, unified package operating near the quantum device.
Yep, the quantum refrigerator.
Now flash forward to “Intel’s 7nm Is Broken, Company Announces Delay Until 2022, 2023.” The write up explains:
Intel CEO Bob Swan said the company had identified a “defect mode” in its 7nm process that caused yield degradation issues. As a result, Intel has invested in “contingency plans,” which Swan later defined as including using third-party foundries.
Perhaps Intel will consider shifting its R&D focus to refrigeration units. Serving the quantum computing sector seems to be a way to pivot from a business in which Amazon Gravitons, AMD chips, and Apple’s custom designed ARM silicon are making headway.
Is Intel’s future horse features? Oops. I meant Horse Ridge. Is that a glue factory under construction on a site adjacent to Intel’s new fabrication facility?
Stephen E Arnold, July 24, 2020
Physics Embraces AI: A Development for One Percent of the One Percenters
July 22, 2020
Say what you like about Newton. Teachers have made gravity “real” to indifferent students with the apple on the noggin metaphor.
Physics teachers today face a different challenge. The “old school” ideas are not going to win promotions, grants, or — even better — a prize. Cash! Fame! The cash thing may work among those who are work-from-home dads and colleagues without a tenure-track ticket.
What can physicists who are the one percent of the one percenters do? The answer is to combine esoteric mathematical concepts with the future forward concept of “artificial intelligence.”
AI can do physics, and “AI in Physics: Are We Facing a Scientific Revolution?” explains this shift. Now, between you and me, there are a number of revolutions underway, but the “real life” stuff is of scant interest to physicists in my experience. Einstein anecdotes notwithstanding, physicists are an interesting chunk of the one percent’s one percenters.
Your homework? Verify the overdensity equation and show each step. No shortcuts! This is forward-leaning physics with “real” representations, simulations, and predicted properties. No apple either.
The write up states with significant seriousness that symbolic regression:
can be used to derive mathematical formulas from the internally represented relationships in the network. Symbolic regression is carried out as a genetic algorithm. Equipped with variables and mathematical operators, the algorithm searches for the simplest mathematical formula with which known data can be reproduced.
Like many helpful mathy statements, this statement illuminates the process:
Their result clearly shows that the mixture of data, neural graph networks and symbolic regression is actually suitable for extracting mathematical formulas – in this case an already known natural law – from data with AI.
I enjoy the “clearly.”
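For the curious, here is a minimal sketch of symbolic regression run as a genetic algorithm. It uses the gplearn library and a toy formula I made up; neither appears in the write up, so treat it as an illustration of the idea rather than the researchers’ method:

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor  # example library, not cited in the article

# Toy data generated from a hidden "law" the search should rediscover: y = x0^2 + x1
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = X[:, 0] ** 2 + X[:, 1]

# The genetic algorithm evolves formulas built from the given operators,
# with a parsimony penalty pushing it toward the simplest formula that fits.
est = SymbolicRegressor(
    population_size=1000,
    generations=20,
    function_set=("add", "sub", "mul"),
    parsimony_coefficient=0.01,
    random_state=0,
)
est.fit(X, y)
print(est._program)  # something like add(mul(X0, X0), X1)
```

The parsimony penalty is the “simplest mathematical formula” pressure the quoted passage describes.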
But the future is not the stuff one can see, touch, feel, sniff, or think about in substantive ways. The future is tackling Dark Matter with AI.
I learned:
The researchers used the neural graph network again. Each node contains information about a dark matter halo such as position, speed and mass and is connected to other halos at a distance of 50 Mpc/h. The network was trained with data from the Quijote Dark Matter Simulation, a collection of generated dark matter structures.
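To make that structure concrete, here is a toy sketch, entirely my own construction with made-up numbers rather than the researchers’ code, of a graph whose nodes are halos and whose edges connect halos closer than 50 Mpc/h:

```python
import numpy as np

rng = np.random.default_rng(0)
n_halos = 200
positions = rng.uniform(0, 1000, size=(n_halos, 3))   # Mpc/h, toy values
velocities = rng.normal(0, 500, size=(n_halos, 3))    # km/s, toy values
masses = rng.uniform(1e11, 1e15, size=n_halos)        # solar masses, toy values

# Node features: position, speed, and mass for each halo
node_features = np.concatenate([positions, velocities, masses[:, None]], axis=1)

# Edges: connect every pair of halos separated by less than 50 Mpc/h
radius = 50.0
dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
src, dst = np.nonzero((dists < radius) & (dists > 0))
edge_index = np.stack([src, dst])  # GNN-style edge list, shape (2, num_edges)
```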
And there is a payoff. Ready?
After the training, the GNN was able to predict the desired property of the halos more accurately than previous models. Using symbolic regression, the researchers were then able to produce a previously unknown mathematical formula that has a lower error rate than the currently most commonly used human-made formula for the same task. The resulting formula was also better able to deal with previously unknown data. For Cranmer, this is a clear sign that the mathematical formula generalizes much better than the neural graph network from which it was derived. This coincides with our previous experience in physics, says Cranmer: “The language of simple symbolic models describes the universe correctly.”
Forget the apple falling on Newton’s slightly addled brain carrier. Think in terms of this metaphor:
If AI is like Columbus, computing power is Santa Maria
And Big Data? Of course, of course. One percent of one percenters know this.
Stephen E Arnold, July 22, 2020
Jargon Alert: Direct from the Video Game Universe
July 22, 2020
I scanned a write up called “Who Will Win the Epic Battle for Online Meeting Hegemony?” The write up was a rah rah for Microsoft because, you know, it’s Microsoft.
Stepping away from the “epic battle,” the write up contained a word from the video game universe. (It’s a fine place: Courteous, diverse, and welcoming.)
The word is “upleveled” and it was used in this way:
Upleveled security and encryption. Remote work sites, especially home offices, have become a prime target for a surge in cybersecurity attacks due to their less hardened and secure nature.
A “level” in a game produced the phrase “level up” to communicate that one moved from loser level 2 to almost normal level 3. That “jump” is known as a “level up.”
Now the phrase has been flipped around and become an adjective, as in “upleveled.”
DarkCyber believes that the phrase will be applied in this way:
That AI program upleveled its accuracy.
Oh, and the article? Go, Microsoft Teams. It’s an elephant, and one knows what elephants do. If you are near an elephant, uplevel your rubber boots. Will natural language processing get the drift?
Stephen E Arnold, July 22, 2020
Optical Character Recognition for Less
July 10, 2020
Optical character recognition software has been priced high, low, and in between. Sure, the software mostly worked, if you like fixing four or five errors per scanned page with 100 words on it. Oh, you use small-sized type? That’s eight to 10 errors per scanned page. Good enough, I suppose.
You may want to check out EasyOCR, now available via GitHub. The information page says:
Ready-to-use OCR with 40+ languages supported including Chinese, Japanese, Korean and Thai.
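For the impatient, a minimal sketch of how the library is typically used; the file name is a placeholder, and the GitHub page has the current details:

```python
import easyocr

reader = easyocr.Reader(["en"])        # load the English model
results = reader.readtext("page.png")  # list of (bounding box, text, confidence)

for box, text, confidence in results:
    print(f"{confidence:.2f}  {text}")
```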
Worth a look.
Stephen E Arnold, July 10, 2020
Interesting Supercomputer Item: Lenovo
July 2, 2020
“Lenovo, Top of the World Chinese Supercomputer Supplier, Sweeps All Markets” contains an interesting statement:
In the Top 500 list for June 2020, China is shown with a home installed base of 228 machines, whereas 20 years ago, in 2000, the country had just two of the top 500 machines installed. In comparison, the US had 258 machines in place 20 years ago, now it has just 117 supercomputers – of which 44, or 38%, are Chinese Lenovo machines. And to further hammer home China’s success, not a single one of the country’s own huge installed base of 228 machines is an American machine – there are no Crays, no IBMs, no Dells. Plenty of American chips, but no American supplier presence.
But wait. Did Lenovo not absorb an IBM unit?
The answer is, “Yes. Lenovo bought IBM’s PC business in 2005.”
The question is, “What was Lenovo’s management able to do with a unit IBM deemed surplus?”
Answer: Nose into new markets.
Why? Let’s ask Watson.
Stephen E Arnold, July 2, 2020
The Cancel Culture in Technology: A New Approach to Sustained and Informed Discussion
July 1, 2020
DarkCyber sifts through a range of content. Some of it is becoming repetitive. Acquisitions of promising start-ups, like Google’s devouring of a rival maker of smart glasses. The story? Competitive fear, a desire to make hay after burning the field and most of the equipment barn, or an easy way to get some employees not yet prone to management resistance while doing the WFH thing. More details about this deal appear in “Google Completes Acquisition of Ontario Smart-Glasses Maker North.”
Another repetitive theme is turning off, disconnecting, and cancelling. This is not the wonky folks living in SUVs and converted delivery trucks. This dropping out is not the Timothy Leary thing. The new approach to cancelling embraces throwing $450 million into a bonfire nobody cared about: the Microsoft retail stores. And top experts in smart software leaving Twitter because of a New Age “conversation.”
“Yann LeCun Quits Twitter Amid Acrimonious Exchanges on AI Bias” brings the culture of open range disputes between sheep herders and cattle ranchers into the zippy 24×7 digital era.
The write up explains in Silicon Valley speak that sheep muddy drinking water and cows do not. Sheep ruin the grazing land. Cattle do not. How is the dispute resolved? As I recall one of my addled teachers explaining, the approach involved poisonings, shootings, fencing, and law enforcement. I am not sure that the problem has been eliminated, but I will generalize that most people do not care about muddy streams and sparse grass.
Today we care about smart software.
The write up points out:
Penn State University Associate Professor Brad Wyble tweeted “This image speaks volumes about the dangers of bias in AI.” LeCun responded, “ML systems are biased when data is biased. This face upsampling system makes everyone look white because the network was pretrained on FlickFaceHQ, which mainly contains white people pics. Train the *exact* same system on a dataset from Senegal, and everyone will look African.“
Definitely contentious.
What interested DarkCyber, however, is not the socio-tech discussion. The message seems to be “I can’t talk to you so I am out of here.”
This is a nice way of hitting the cancel button.
Several observational questions:
- Is this a sheep versus cattle argument?
- How do technology’s refinement processes operate when improvement muddies the drinking water?
- How does dropping out, turning off, and tuning out contribute to innovation?
Cancel means more than not tweeting. Cancelling is officially a trend even for allegedly informed and enlightened techno-herders.
Stephen E Arnold, July 1, 2020