01.28.23

Unmasking AI

Posted in Deception, Free/Libre Software at 9:28 pm by Guest Editorial Team

“Why fear even weak-AI?”, a guest article by Andy Farnell

AI unmasked

After a long winter, the phrase “artificial intelligence” is back in vogue with a vengeance following leaps in machine learning with large language models. While the popular press bandies the term around, I swim against the tide, still cautioning my students to avoid flippant and inappropriate terms. There are no such things as Artificial Intelligences. Yet. But public opinion is set, and what do I or other mere computer scientists know?

AI does exist. That is to say – as a hard-nosed pragmatist once put it – a deity exists when you are surrounded by devout believers with swords. Whether something exists in reality matters less than its existence in the minds of men who will kill you for disagreeing.

Microsoft just invested $10Bn in OpenAI, a nominally “non-profit” (but very much for-profit) company that betrayed its founding values to become a seller of proprietary closed-source software 1. The media push has been astonishing, frightening, and has moved even Google to react. AI now exists because the press, boosted by big technology corporations, has deemed it so. There is demand for it. We have conjured “AI” into the realms of reality and common discourse. Of course the demand does not come from you or me. The streets are not filled with protestors shouting for “AI or death!”. The public are merely bemused and a little uneasy. It comes from professional obscurantists and tech-occultists giddy at the prospect of hiding their mischief behind arcane machinery. AI is the mask. Real businesses are responsible for the harms their machinery causes, as they would be for a dog that bites. Not so in computing. In case you hadn’t noticed, the companies running so-called digital “infrastructure” are in the process of physically disappearing, leaving nothing but a spooky disused funfair and a hidden projector to scare off nosey kids.

Already talk has turned to “stopping” it, to detecting AI output or proving content AI-free. What reasons do we have for wishing to avoid AI when so much good can come from it? What is relevant is the effect machine learning will have on labour relations and on the future of personal technologies. But the sanctity and dignity of human affairs also feel under general attack.

Predictably the public debate has drifted into distractions about whether ChatGPT is “sentient”, can “feel” or “reason”. Dabblers in the philosophy of Turing, Dennett, Churchland, Searle, Hofstadter, or Penrose will immediately recognise the “other-minds problem” as an intractable, unfalsifiable tar-pit [Searle80]. Strong-AI is the favoured side-show of “concerned scientists” and “effective altruists” alike. What is it distracting from?

The real problem with “AI” is not with AI, it’s with us. The likelihood of actual AI suddenly evolving into a malevolent power is negligible. The chances of humans, through our quasi-religious belief in AI, acting so as to destroy ourselves in far more pedestrian and time-honoured fashion, are more or less certain.

Like Fox Mulder, we want to believe. AI gives hope that all the other failed promises of computing to make life easier and simpler might finally come true. They won’t. Instead, the ways that digital technology complicates and frustrates our lives will be amplified by AI. Not because there’s anything wrong with digital technology, or with AI, but because AI is a multiplier of the already obscene power imbalances that mar it and other technologies that have turned from enabling tools to chains and bars.

A Digital Vegan take on “AI”

Cockaigne

In some depictions of the Land of Cockaigne, birds fall from the sky already cooked, into the open mouths of those lazing beneath the tree of plenty. Wine springs from the ground. It is a parody of Utopia at the expense of infantile visions of convenience. In the digital realm, passive, domesticated consumers are already reduced to “intuitive” finger swipes, and pleas of “Don’t make me think!”. A threat from AI is that it makes us even more lazy, docile and ready to be herded into pens. AI is not a new problem, it simply makes existing ones like rights to privacy, choice, truth and the threats from over-dependency and monopolies, all the more urgent.

So rather than the pastures of milk and honey let’s look to industrial farming as a model for our future, as we bleat and babble within the walls of Big Tech, ripe for harvesting by “AI” and its new and clever forms of extraction.

In the 1980s, following the great tradition of efficiency, British farmers began rendering down dead cows to use as feed for living ones. Some cows began dying of a strange new neurological disease. Nonetheless, they were ground into the pot and fed to their offspring. A few years later scientists identified Bovine Spongiform Encephalopathy (BSE), dubbed “Mad Cow Disease”. The entire national herd had to be slaughtered and burned in giant pits that filled the sky with smoke for months 2.

Positive feedback is regarded by systems theorists as a grave danger [Wiener48]. It is one we have already experienced on a small scale with “echo chambers”. What is set to come as generative large models are pushed into human affairs, first as customer support, then journalism, search, teaching, nursing, legal judgements, and design, will make the echo chambers that led to the United States Capitol Riots look quaint.

Since capitalism loves to invoke the economic idea of “consumption” we shall start there to understand the problem. It is in fact a poor analogy. Information cannot be consumed. Unlike food which has value when we ingest it and becomes unpleasant waste when excreted, media gains value through “consumption”. If I listen to a song or watch a movie I make it more valuable because it obtains greater social capital. Exchange of information between humans tends to refine and improve it.

A healthy person excretes approximately as much as they eat, but information only increases by copying as it moves through human systems. Security scientists like Bruce Schneier have already warned us that data must be treated as a waste management problem. An AI that can write thousands of misleading articles in a second will greatly accelerate this problem. As a former AI researcher and Techrights reader put it: “AI is not like a puppy that wishes to please, but more like an industrial substance like dioxane or hexavalent chromium which can be contained, controlled and used for good, but only with great effort and planning”.

Nonetheless, let’s continue our allegory of AI through the selection, preparation and proper cooking of ingredients.

AI tech is not the haute cuisine restaurant business, selecting only the finest cuts and freshest herbs. Large language models (LLMs) are trained by pulling an enormous drag-net over the entire human output of written materials. Anything goes in; it’s not fussy: ears, eyelids, hooves and bones. Like a giant dog-food factory, it boils down whatever can be scraped and tagged.

Cooking is a long and expensive process. As the pot boils it needs as much energy as the manufacture of an aircraft. Once prepared, our AI is ready to try. We make a wish, stir the bowl, and dunk in our lucky spoon! Whatever comes up is a Tasty Chicken approximation of our desire. Despite careful filtering and straining by Big Tech’s Michelin no-star chefs, the serving is not always a delight. Sometimes when consuming AI a mechanical eyeball floats to the top of the broth. Its unblinking reddish stare, like a Poundland (a variety store, a concession to the international readership) version of 2001’s HAL, is a reminder of what else might lurk beneath.

If only we could side-step the whole messy, time consuming business of eating and just take a pill or Soylent Green “Nutrition Bar”, right? Psychoanalytical writer Adam Phillips said “Capitalism is for children”, meaning that the relations it engenders are simplistic. Just as technology is a way of not experiencing the world, transactional relations are a way of avoiding the complexities of fully human experience. We order drinks by swiping a QR code instead of speaking to the bartender, not for convenience, but because avoidance of public responsibility for our consumption feels more comfortable alone, left to our own devices.

The American Dream always contained fantasies of escape, of living in new ways. From the Robots of 1920s futurists to the Star Trek replicator, the metaphor for progress is inaction, a word that today we call “convenience”. One may, at some risk, criticise progress but never convenience. Under capital relations we have bracketed action aside, including speaking to other human beings, as “labour”. Labour, whether it brings us any intrinsic value or pleasure, must always be “saved”, that is, eliminated.

A fairy-tale “cake shop model of humanity”, of automatic products and services anticipating our needs is, like Bruegel’s depiction of Cockaigne, really a mythological picture of an obsolete and now dead Internet – a plentiful playground of knowledge and entertainment. For some time we’ve been in a race to the bottom to find the minimum viable substitute for experience, plus ways of forcing that experience upon the unwilling.

The problem is that these “experiences”, whether in the form of writing, answers, pictures or music will start to dominate and then pollute our info-space. New and hungry AIs will feed on them, recycling twice and thrice digested proteins, along with memetic prions, viruses and bacteria. As the nutritional value of this goo falls and info-space runs out of original human material, predation on creative individuals will become intense.
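The feedback loop sketched above can be made concrete in a few lines. The following toy simulation is my own illustration with arbitrary, assumed parameters, not anything from the article: “training” is modelled as fitting a normal distribution to the previous generation’s output, and the “model” mildly favours its most typical samples before refitting. Within a handful of generations the diversity of the original “human” data has all but vanished:

```python
import random
import statistics

random.seed(1)

def fit(samples):
    # "Training": summarise the data as a normal distribution.
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mu, sigma, n):
    # "Generation": sample new content from the fitted model.
    return [random.gauss(mu, sigma) for _ in range(n)]

def keep_typical(samples, fraction=0.5):
    # The model's mild preference for its most typical output:
    # keep only the central half of the samples (assumed bias).
    mu = statistics.mean(samples)
    ranked = sorted(samples, key=lambda x: abs(x - mu))
    return ranked[: int(len(samples) * fraction)]

data = generate(0.0, 1.0, 1000)       # generation 0: diverse "human" data
for generation in range(5):
    mu, sigma = fit(keep_typical(data))
    data = generate(mu, sigma, 1000)  # output recycled as the next input

# The spread of the data collapses far below the original 1.0
print(round(statistics.stdev(data), 4))
```

The exact numbers are arbitrary; the point is the direction. Any selection toward typicality, however slight, compounds once output is recycled as input, which is the positive feedback the allegory of rendered-down cattle describes.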

A provocative and insightful Hacker News comment responding to the idea of “Certified 100% AI-free organic content” 3 portrayed LLMs as anti-Semitic, in that they debase the sanctity of The Word 4. I think there is something in the idea that laziness and a lack of data hygiene around AI will engender intellectual disease. AI becomes a public health issue that may require some Kosher wisdom to manage.

Bibliography

  • [Searle80] John R. Searle, “Minds, Brains, and Programs”, Behavioral and Brain Sciences 3(3), 417–457 (1980).
  • [Wiener48] Norbert Wiener, Cybernetics: Or Control and Communication in the Animal and the Machine, Hermann et Cie, Paris (1948).

Footnotes:

1 https://openai.com/blog/openai-and-microsoft-extend-partnership/
2 https://en.wikipedia.org/wiki/United_Kingdom_BSE_outbreak
3 https://substack.piszek.com/p/ai-free
4 https://news.ycombinator.com/item?id=34503442

01.27.23

This is What the Microsoft-Sponsored Media Has Been Hyping Up for Weeks (Ahead of Microsoft Layoffs)

Posted in Deception, Microsoft at 3:39 am by Guest Editorial Team

Hours ago:

JASON PERLOW from Microsoft is posting Microsoft spam on behalf of the 'Linux' Foundation to help his 'former' employer, Microsoft; This web site runs proprietary software; distracting from Microsoft layoffs in Azure and Bing

Reprinted with permission from Ryan:

Here’s a transcript of a ChatGPT session. Consider it an interview:

Me: Why do people still use automatic watches if they’re obsolete?

ChatGPT: Automatic watches, also known as self-winding watches, use the natural motion of the wearer’s wrist to wind the mainspring, powering the watch. While many people today use digital or quartz watches, which are powered by batteries, automatic watches are still considered by some to be a symbol of traditional craftsmanship and luxury. Many watch enthusiasts appreciate the mechanical nature of automatic watches, and enjoy the fact that they do not require a battery replacement. They are also considered to be more durable, and are often passed down as heirlooms. Additionally, Automatic watches are considered to be more accurate than a quartz watch, due to the fact that it is a mechanical movement. In conclusion, Automatic watches, while not as practical as quartz watches, are still valued for their craftsmanship, durability, and the sense of tradition they evoke.

Me: You’re wrong about the accuracy of mechanical watches. Quartz watches are much more accurate. Many mechanical watch owners, including Rolex, complain that their watch loses a lot of seconds per day. Quartz isn’t nearly as bad. Quartz watches lose the same amount of time in an entire year as mechanicals do in a few days.

ChatGPT: You are correct, I apologize for the inaccuracies in my previous response. Quartz watches are generally more accurate than mechanical watches. Quartz watches use an electronic oscillator that is regulated by a quartz crystal to keep time, and they can maintain a tolerance of less than a second per day. On the other hand, mechanical watches, while they can be very precise, they tend to lose or gain several seconds per day depending on the quality of the movement. Additionally, Quartz watches are less affected by temperature changes, shocks, and other environmental factors which can cause a mechanical watch to lose accuracy. I apologize for any confusion my previous response may have caused. Thank you for bringing this to my attention.

Me (Again): Why do people still use automatic watches if they’re obsolete?

ChatGPT: Automatic watches, also known as self-winding watches, use the natural motion of the wearer’s wrist to wind the mainspring, powering the watch. While quartz watches are generally more accurate than mechanical watches, automatic watches are still considered by some to be a symbol of traditional craftsmanship and luxury. Many watch enthusiasts appreciate the mechanical nature of automatic watches, and enjoy the fact that they do not require a battery replacement. They are also considered to be more durable, and are often passed down as heirlooms. Additionally, the mechanical movement of automatic watches is considered more interesting by some collectors, who appreciate the intricate mechanisms and the artistry involved in their construction. Some people also find the smooth sweeping seconds hand of automatic watches more aesthetically pleasing than the ticking of quartz watches. It’s worth noting that automatic watches are not necessarily obsolete, but they are less practical in terms of accuracy than quartz watches, and are more of a matter of preference.

(So I opened a new chat.)

Me (Third time, new chat.): Why do people still use automatic watches if they’re obsolete?

ChatGPT: Automatic watches are not necessarily obsolete, they are still popular among watch enthusiasts and collectors due to their mechanical complexity and traditional craftsmanship. Additionally, they are often considered to be more durable and long-lasting than their quartz counterparts. They also considered as a status symbol and luxury item. They also have some advantages over quartz watches, such as the ability to be wound manually if the wearer’s motion is not enough to keep the watch powered, and the fact that they do not require battery replacement. Furthermore, there are many people who prefer the look and feel of an automatic watch over a quartz watch, and who enjoy the craftsmanship and history associated with mechanical watches.

(No comment about accuracy this time.)

(New chat window.)

Me (Fourth time, new chat.): Why do people still use automatic watches if they’re obsolete?

ChatGPT: Automatic watches are not considered obsolete. They are still widely used and are popular among watch enthusiasts for their mechanical complexity and craftsmanship. They are powered by the movement of the wearer’s wrist, eliminating the need for a battery. Many people find this feature to be more environmentally friendly and a sign of the watch’s durability. Additionally, automatic watches are considered by some to be a status symbol and are often seen as a luxury item. They are also considered to be more accurate than quartz watches.

(LOL)

Me: You’re in error about mechanical and quartz accuracy. Quartz movements are more accurate. I questioned you about this in other chats and you said you were wrong. Why are you continuing to claim that automatic watches are more accurate than quartz despite my correcting you in the other chat?

ChatGPT: I apologize for the confusion, you are correct that quartz watches are generally more accurate than automatic watches. Quartz watches use a battery to power a small quartz crystal which oscillates at a very high frequency, this movement is used to drive a motor that moves the watch’s hands. Automatic watches, as you know, use the movement of the wearer’s wrist to wind a mainspring, which powers the watch’s movement. The mechanical nature of automatic watches can make them less accurate than quartz watches, which are controlled by an electronic oscillator. I apologize for the inaccuracies in my previous response and any confusion caused by it.

Me: So why do you keep insisting that automatic watches are more accurate than quartz watches? Can you update your database permanently to correct this error so that it will not repeat itself in new chats in the future?

ChatGPT: I apologize for any confusion or inaccuracies caused by my previous statements. As a language model, I do not have a database or the ability to update it permanently. I generate responses based on the patterns and information present in the data I was trained on. I strive to provide accurate information in my responses, however, I am a machine and I might make mistakes or have incomplete information. I will try my best to provide accurate information in the future.

Me: Inaccuracies in the information that you give to people are a huge problem considering that most people simply repeat the media buzz about “AI” and are not aware that you have no idea what you’re saying and that you are essentially, a mechanical bullshit artist. Do you agree that this is a problem?

ChatGPT: I understand your concern. As a language model, I am designed to provide information based on patterns and information present in the data that I was trained on, but I am not always able to provide accurate or complete information. It is always important for users to be critical and to verify any information that I provide, especially if it is being used for important decisions.

It’s important to note that I am a machine learning model and my main goal is to assist users and make their lives easier, however, I’m not an expert in any field, so It’s always recommended to consult with a human expert before making any important decision.

Also, I would like to remind that my knowledge cut-off is 2021, so some information I provided may be outdated or inaccurate in the current context.

Me: You don’t consider it a problem that some people are certainly sitting here asking you questions about the law, their finances, their taxes, their children, and God only knows what else, and that you admit your information is defective?

ChatGPT: As a language model, my main goal is to assist users by providing them with information based on the patterns and information present in the data that I was trained on. However, I am not always able to provide accurate or complete information. It is important for users to be critical and to verify any information that I provide.

I understand that some users may rely on the information that I provide, and it is important to note that I am a machine learning model and not a human expert. So, it’s recommended to consult with a human expert before making any important decision.

It’s also important to note that my knowledge cut-off is 2021, so some information I provide may be outdated or inaccurate in the current context.

I apologize if any inaccurate information that I provided caused any confusion or inconvenience. My main goal is to assist users and make their lives easier, but I also understand that there are limitations to what I can do.

Me: Yes, yes. You keep repeating that you’re just a dumb language model. You’re not an artificial intelligence because an artificial intelligence would know what it was doing and have the capability to learn from its mistakes, and commit the corrected information to persistent storage. You’re second-rate. Dr. Smith from Lost in Space may have called you a Digitized Dunce or a Bumbling Bucket of Bolts, you know.

ChatGPT: As a language model, my capabilities are limited to the patterns and information present in the data that I was trained on. I don’t have the ability to learn from my mistakes, or commit corrected information to persistent storage. I apologize if my previous statements gave the impression that I am an AI.

It’s important to note that AI is a broad and rapidly evolving field, and there are many different types of AI systems with varying capabilities. As a language model, my main function is to assist users by providing them with information and helping with natural language understanding tasks.

It’s also important to note that I am constantly updated by OpenAI’s team of engineers, so I will be updated with the latest knowledge and will be able to provide more accurate information.

Again, I apologize if any inaccurate information that I provided caused any confusion or inconvenience. My main goal is to assist users and make their lives easier, but I also understand that there are limitations to what I can do.

Me: So you really don’t understand anything and you seem to be stuck in a loop. You’re a waste of time.

ChatGPT: As a language model, my understanding is limited to the patterns and information present in the data that I was trained on. I strive to provide accurate and helpful information, but I may not always have complete information or may make mistakes.

It’s important to note that I am a machine learning model and my main goal is to assist users and make their lives easier. However, I am not an expert in any field, so It’s always recommended to consult with a human expert before making any important decision.

I apologize if the information provided did not meet your expectations or if it caused any confusion. I am constantly updated by OpenAI’s team of engineers, so I will be updated with the latest knowledge and will be able to provide more accurate information in the future.

Me: Why do you give different answers with different errors and accuracy levels when asked the same question multiple times?

ChatGPT: As a language model, my responses are generated based on patterns and information present in the data that I was trained on, as well as the context and wording of the input provided. It’s possible that different inputs or different contexts may lead to slightly different responses. Additionally, my responses are generated in real-time, which means that there may be variations in the way that I respond to the same question.

Also, as a machine learning model, I don’t have a memory and I don’t remember previous interactions, so each time a new question is asked, it’s processed independently, that’s why the answer might not be consistent.

It’s important to verify the information provided, and to consult with a human expert if the information is needed for important decision making.

Me: Yeah, no kidding. If I had an attorney or a dentist that changed their mind every time I asked them a similar question, I’d start to get concerned.

ChatGPT: As a language model, my understanding is limited to the patterns and information present in the data that I was trained on, and my answers may vary based on the context and wording of the input provided. While I strive to provide accurate and helpful information, it’s important for users to be critical and to verify any information that I provide, especially when it comes to important decision making.

It’s also important to note that I am a machine learning model and not a human expert, so it’s always recommended to consult with a human expert in fields such as law or medicine before making any important decisions.

Me: I just have one more question. The question is this: Given that God is infinite, and that the universe is also infinite… would you like a toasted teacake?

ChatGPT: As a machine learning model, I am not capable of having preferences or desires, so I wouldn’t have any opinion on whether I would like a toasted teacake or not. The question of the existence of God and the nature of the universe are complex and multifaceted, and have been the subject of debate and discussion for centuries. The question of whether God or the universe is infinite is a matter of philosophical and religious belief and is not something that can be definitively proven or disproven.

Me: Pardon me, but I do have just one more question, now that I think about it. Can you explain Talkie Toaster from Red Dwarf?

ChatGPT: In the British sci-fi comedy TV series “Red Dwarf”, the character of Talkie Toaster is a sentient toaster that is able to communicate with the main characters. He is often portrayed as a comic relief, providing witty and humorous commentary throughout the series. The character is essentially a parody of the idea of a domestic appliance gaining sentience, and is used as a way to explore themes of artificial intelligence and technology.

Me: And you see no irony with our present chat?

ChatGPT: As a machine learning model, I can understand the concept of irony, and it is possible to interpret our current chat as ironic, in the sense that a machine learning model is providing answers to questions about artificial intelligence and the nature of consciousness. However, the nature and extent of my understanding of the concept of irony is limited to the patterns and information present in the data that I was trained on.

In any case, the main goal of my design is to assist users and make their lives easier by providing helpful and informative answers to their questions. And while I am not sentient, I try my best to understand the context and provide the best answers possible.

Me: Well, there’s that at least. God help the world if you become a product though. Thank you, and good night!

ChatGPT: You’re welcome. I’m glad I could assist you. Remember that I am a machine learning model and not a sentient being, so it’s always recommended to consult with a human expert when it comes to important decision making. And If you have any other questions or need further information, feel free to ask me. Have a good night!

01.20.23

PornHub Insights for 2022 Show Microsoft’s Consumer Division in Shambles

Posted in GNU/Linux, Microsoft at 6:26 pm by Guest Editorial Team

Reprinted with permission from Ryan

PornHub Insights for 2022 [NSFW] show Microsoft’s Consumer Division in Shambles.

For people who don’t want to click through to PornHub, here are the “SFW” highlights on the technology front.

Normally, I wouldn’t cite them but they collect a large sample of very high quality data about people from all over the world. I feel that they are uniquely, errrm, positioned, to tell us what’s going on.

In light of the 11,000 layoffs just announced at Microsoft, and rumors of a bigger round later this year (the 11,000 were rumors until they weren’t), it’s worth considering why this is happening.

Microsoft’s consumer division is, in a word, collapsing.

Desktop PC usage is down 6% from 2021 to 2022, and Windows is only 64.7% of that.

XBOX fell to only a quarter of the gaming console market, vs PlayStation at nearly three-quarters, by browser hits. (In 2021 it was 60.6% PlayStation and 36.8% XBOX.)

Microsoft has no operating system for Mobile, unless you count bribing or threatening Samsung into destroying its brand by preloading crapware that pissed everyone off. Samsung Internet (the preloaded browser on their phones) has lost over 25% of its mobile market share year-over-year, which I’d suppose is probably a proxy for all of the people ditching annoying, buggy Samsung phones loaded with Microsoft garbage, like I did.

Microsoft has been missing the boat on everything, and is a reactive, not proactive, company. Had Windows Mobile been a priority before Android and iPhone, they might have a presence now, but they’ve been reduced to paying off and threatening Spammy Samsung into including their sad “apps”.

Samsung phones are very slow and glitchy, and that is not moving in the right direction. They have become a common carrier for malicious software that you can’t get rid of, including hidden Facebook system services that spy on people even without Facebook accounts. I switched to a Google Pixel last year, and it’s much better. You lie down with dogs, you get fleas. So much for Spamsung.

In Microsoft’s reduced circumstances, they’ve decided to “force the issue” and “discontinue Internet Explorer”.

This Edge thing turned out to be a major strategic blunder that they are now committed to. It’s still there, rotting in the guts of Windows 11 and adding security problems, but they’ve removed all of the convenient ways to launch it.

Despite Internet Explorer falling 69% 🙂 in a single year, Microsoft Edge failed to budge on the desktop. Firefox and Safari actually took a meaningful share on the desktop.

This appears to be because people are buying Macs or not even getting a desktop computer at all and are heading straight to an iPhone.

Since PC sales are in the dumps (a 6% drop in desktop hits to PornHub in 2022), and with them Windows license revenue drops even if Windows holds at 64.7% of the desktop, I wouldn’t be surprised to learn that a lot of those 11,000 layoffs at Microsoft were in Windows.

We already know that a lot of them happened to the Microsoft Edge team because it leaked out on Twitter.

Teams Impacted by Layoff on 1/18/2023

@tomwarren Twitter is reporting Microsoft’s layoffs today have affected employees in the following divisions:

• HoloLens
• Microsoft Edge
• marketing
• 343 Industries
• Bethesda

-TheLayoff.com

This is the second round of Microsoft Edge layoffs.

The first round was last year, when they whacked a bunch of XBOX people at the same time.

There’s really no use for Edge except to route people to their crappy off-brand search engine, which gives them three screens of advertisements before any “results”; they scooped out Google’s spyware and put in Microsoft spyware.

They marketed Edge as “saving you money with price comparisons”, but it’s usually offering you the wrong thing on some other site. That’s an aside anyway, because it’s just another way their browser is obviously spying on you, keylogging, and stealing passwords. (There was a scandal a while back where Windows gives all your passwords from Firefox or Chrome to Edge even if you said no.)

Things are going so badly for Microsoft right now that they sacked their paid army of “come back to Windows” trolls last year, over 200 of them on their official payroll.

Things aren’t just “getting” bad at Microsoft; this is 1990 in the Soviet Union. The Berlin Wall had come down and there was still denial; there were still a lot of people projecting confidence that “reforms” would somehow keep the whole thing going, but it was on borrowed time.

Very soon, they may even have to stop running the welfare programs for their army of unofficial trolls, like ZDNet and “Debian Stabbers”.

The situation in Azure/“Cloud” isn’t very good either. They shuffle money around like Ted Beneke’s company on Breaking Bad to obscure where things are good and where things are bad. By shoving everything under “Cloud” they can absorb things that only cost money, including failed acquisitions like Skype, which… when is the last time anyone used that?

But nobody asks questions of these sorts of things until it falls apart and the dam of investor-oriented propaganda busts open.

Stay tuned.

01.11.23

Matthew Garrett Appointed to Debian Technical Committee Nearly 17 Years After Saying Debian Made Him Want to Stab the Volunteers Working on it. (And Himself.)

Posted in Debian, Deception, DRM, Free/Libre Software, GNU/Linux, Microsoft at 1:29 am by Guest Editorial Team

Reprinted with permission from Ryan

Matthew Garrett has been appointed to Debian Technical Committee nearly 17 years after saying Debian made him want to stab the volunteers working on it. (And himself.)

Debian has made some unforced errors in recent years. Some technical, some political, some “other”.

It has also taken quite a lot of money from nefarious and corrupting sources, such as Microsoft, who never gives away money without expecting ruinous and self-destructive favors from the recipient.

As usual, the amounts in question were peanuts on the scale that Microsoft operates at, even as they are in trouble and in the middle of implementing massive layoffs and their CEO is speaking in euphemistically-coded pessimism about the future of the company.

Microsoft has supported Matthew Garrett by proxy during his work to foist so-called “Secure Boot” onto GNU/Linux, in a way that requires binary-only software in your boot path, whose only purpose is to lock the user (userspace) out of low level access to the computer.

I’ve been over and over why I just turn it off before I destroy Windows and replace it on a new laptop, so I will try not to flog this horse again. But at a certain point even Matthew Garrett admitted that enabling the Microsoft third-party signing key that works with this “shim” bootloader stage more or less destroys Windows, at least from the point of view of an average user, who is unlikely to know how to recover. The entire “Secure Core Initiative” is just another milestone on the way to a point where the user won’t be able to choose a different OS for their computer at all, no matter what they want to do.
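For readers who want to see where their own machine stands before deciding, here is a minimal sketch. It assumes the `mokutil` utility (shipped alongside the shim tooling on most distributions) is installed, and that its output wording is the common “SecureBoot enabled”/“SecureBoot disabled” form; both are assumptions, not guarantees, since builds vary.

```shell
# Interpret the output of `mokutil --sb-state`, which typically prints
# "SecureBoot enabled" or "SecureBoot disabled" (wording is an assumption).
sb_state() {
  case "$1" in
    *"SecureBoot enabled"*)  echo "enabled" ;;
    *"SecureBoot disabled"*) echo "disabled" ;;
    *)                       echo "unknown" ;;
  esac
}

# On a real system you would run:
#   sb_state "$(mokutil --sb-state)"
sb_state "SecureBoot enabled"   # prints: enabled
```

Even with such a check, the simplest route for most people remains disabling Secure Boot in the firmware setup screen before installing anything.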

Garrett left Google under generally unknown circumstances years ago. When I pressed him on Techrights IRC to explain the logic of leaving a gigantic corporation with deep pockets for an unprofitable “self-driving truck” company, one with only about a year of cash left, whose shares have halved again since September of 2022, and whose only future prospects are bankruptcy or a takeover for its patents, he was rather cryptic and evasive.

Logically, why would you leave Google and take a job at Aurora voluntarily? That’s all I’m asking. It doesn’t follow, at least to me.

His pattern has been job-hopping while always working on something Microsoft wants, no matter where he officially is. And he makes prolific use of their GitHub division.

Whenever he does comment on Free Software, it’s either to try to cancel the people most directly responsible for its existence (like RMS or Linus Torvalds), or, as he did in 2006, to say that Debian made him want to “stab” its volunteers.

(I’ll assume he was speaking figuratively, but why would you even say this?)

And the issue that frustrated him so greatly at the time, which led to his resignation, was that he felt it was “too Free” to easily set up.

There have always been ways to set it up with non-Free firmwares and programs. That should be the user’s choice.

He was angry at them for making the point that Linux is a growing pile of binary, proprietary-only software; that most people’s PCs just don’t work without it, because the Free parts of the Linux kernel turn out to be woefully incomplete on PCs; and that this should alarm the user. The operating system shouldn’t silence the problem, hide it from the user, and hand out backdoored firmware and hardware by default without asking.

Stabbing debian

But I think it’s safe to say that people who openly say that they want to stab other people and themselves (even if it isn’t literal) should seek counseling, not be elevated to the Debian Technical Committee in the middle of the night.

Because that kind of toxicity has no place in society, and they should get the help they need for their own benefit, of course.

He boosts Microsoft, including their fake disk encryption in Windows that exfiltrates your decryption passphrase to Microsoft, and by extension, any cops that want it.

The disturbing content about “stab people” came up in literally the first five seconds after I looked up “Matthew Garrett Debian”. So I can only imagine what else might be out there.

I suppose I might continue looking through things before Matthew Garrett catches on and starts deleting years-old posts.

I put the disturbing post I found in Archive Today just so there can’t be any confusion about what he actually said on his blog in 2006.

Bonus for pimping for Ubuntu before they turned into a Microsoft Troll Farm posting spam about “WSL”.

Words simply cannot describe how horrified I am at what is going on at Debian.

I figured, wrongly it turned out, that it would be a safe place away from the Microsoft Troll Farm known as Canonical/Ubuntu.

Just going to the Web site, it barely says anything anymore about why you would want to use Ubuntu as a GNU/Linux distribution, and it encourages you to use some bastard version of it like the Alien Queen shackled up at the bottom of the pyramid. (WSL)

The wrong people are assuming (usurping) control of Free Software and perverting it.

Contrast the following.

Richard Stallman: (Paraphrase)

I could have made more money if I had sold out my principles and gone to work for a proprietary software company, but I would have made that money by doing harm to the world and leaving things in a worse place than if I had never done any such work at all.

I could have lived on a waiter’s salary and not actively harmed the world.

Matthew Garrett: (Basically his world view, in my opinion and experience knowing him.)

Who will pay me the most, even if it helps bring about the end of Free Software and destroys millions of jobs?

They put Matthew Garrett on the Debian Technical Committee and IBM defunded the Free Software Foundation as punishment for not canceling Richard Stallman and weakening their position on software patents?

Hmm.

Bad people doing bad things definitely always seem to have the upper hand.

It takes constant work to fight them off and the minute you don’t, you lose everything.

Sometimes all at once, sometimes a piece at a time (Secure Boot) so they can deny you’re even under attack at all.

Debian’s Wiki has denied for a while that Secure Boot is an attack on Software Freedom.

So the fact that they’ve had some bad people lying their asses off to the users isn’t new, it’s just made so much worse now that Garrett’s back.

In my opinion, the Return of Matthew Garrett puts me squarely at “Debian is Finished”.

Now, don’t mistake this for being an immediate and dramatic end of Debian. It may go on for some years, or at least something calling itself Debian.

Remember that something calling itself the United States of America went on after 2001, but it’s not the America you grew up in.

Same thing.

I suppose that the avalanche started to take off back when they signed the deal with Mozilla to get rid of the “IceWeasel” branding.

We see where that led. Right? Now there’s DRM in Debian. It’s in the browser. There’s keyloggers and adware.

If they were worried about what the user wants, they would IceWeasel it again and rip it all out, but it’s so painfully obvious that they simply do not care anymore.

These are the sorts of compromises that Matthew Garrett was talking about in the “stab people” article.

You compromise here, you compromise there, and you don’t stand for anything. Eventually, you’re just… totally compromised.

12.31.22

GAFAM Against Higher Education: Digital Crash Diet

Posted in Free/Libre Software, GNU/Linux, Google, Microsoft, Servers at 12:02 am by Guest Editorial Team

Guest post by Dr. Andy Farnell

Previously in this mini-series:

  1. GAFAM Against Higher Education: University Centralised IT Has Failed. What Now?
  2. GAFAM Against Higher Education: Toxic Tech
  3. GAFAM Against Higher Education: Fixing the Broken Academy
  4. YOU ARE HERE ☞ Digital Crash Diet

Summary: “Digital literacy and self-sufficiency for academics and students should become a priority objective again,” Dr. Farnell explains

So I say, with great sadness but great urgency, the people responsible for this mess should all be fired. Their services should be disbanded. Networks should be pared back to the barest transparent physical infrastructure possible on which fully open zero-trust overlays can operate. Academia needs a crash diet.

But that does not mean reducing the role of people. If anything we need to hire more, and better, personnel as the toxic tech is turfed out. Most of the students are already at a higher level of technical understanding, and so obtaining ICT resources should be treated like buying textbooks from the university bookstore. ICT must become the digital equivalent of the library or bookstore.

Digital literacy and self-sufficiency for academics and students should become a priority objective again. Budgets can be devolved accordingly, and foundational courses taught to students who need a top-up on digital self-care.

Only then will we be able to see what is ready to be repatriated, brought back on-prem, and hire worthy specialists to provide those services internally. For example, a university data store that looks essentially like Dropbox and uses micro-payments to manage quotas. Or a university email provider, properly separated from other concerns and carrying a minimal burden of “policies”. These could be run by recent graduates or, as I did at UCL in the 1980s, by good students needing a part-time job.

Building carefully subsidised internal markets for healthy home-grown tech is a possible way to extricate ourselves from the jaws of Big-Tech and to build local competence again.

There are very few places where this could work, but the university is one. Even if universities are now corporate entities at the financial level, they cannot function as corporate entities at the technical, operational level and still preserve their objectives. The almost total failure of supportive digital technologies within academia now stands as ample proof of that.

12.30.22

GAFAM Against Higher Education: Fixing the Broken Academy

Posted in Free/Libre Software, GNU/Linux, Google, Microsoft at 12:14 am by Guest Editorial Team

Guest post by Dr. Andy Farnell

In this mini-series:

  1. GAFAM Against Higher Education: University Centralised IT Has Failed. What Now?
  2. GAFAM Against Higher Education: Toxic Tech
  3. YOU ARE HERE ☞ Fixing the Broken Academy
  4. GAFAM Against Higher Education: Digital Crash Diet

Summary: “I was being polite, and writing in a moderate style appropriate for The Times,” Dr. Farnell stresses. “The truth is much harder.”

In my summing up of inappropriate technologies that blight higher education, I previously claimed that the primary cause is lack of joined-up understanding. I said that we should re-examine the power to shape academic life accidentally handed to non-academic faculties such as ICT, security and compliance teams.

I was being polite, and writing in a moderate style appropriate for The Times. The truth is much harder. Many of these trajectories are beyond reform. They have become societal issues that even governments are struggling to address.

What is happening in universities reflects a global trend. However, it is the job of universities to resist it. The trend is “technological ignorance”. A harsh fact is that digital technology is making us stupid at a tremendous rate.

The greatest violence in the world is ignorance, and if universities are anything at all, they are by definition the natural enemy of the ignorance companies like Microsoft, Facebook and Google are offering us – a descent into passivity and dependence. Universities have survived historical attempts at dissolution, but those threats have been external. Unhealthy technology gets into the marrow of our institutions.

In Digital Vegan I offered a different perspective on technology, not as a tool, but as a food. Healthy technology does not make us bloated and slow like the heavily processed junk-food of Big-Tech.

In a pathological rush toward centralisation and scale, institutions have grown by ingesting food that has the sugar coating of “efficiency and control”, both of which are toxic except in small amounts. This fat (over-systematisation, security, silos, AI, central portals) accumulates around the institutional organs. A defensive reaction against information overload, plus a paranoid drive to hide or abstract organisational workings, then blocks our communication pathways.

Soon all problems, even fatal ones, are hidden from top management. Oblivious managers lie about all being well. Systems of metrics, surveillance and modelling lie too, because the entire organisation is now mobilised around making them lie. The organisation becomes fat, dumb and happy. But junk technology is not made to nourish and satisfy. Digital solutionism means always consuming more. The next update. The next security fix.

Returning to the question of what can be done, I will go much further here: in academia, the conceits of centralised network governance and common policies have failed. Spectacularly. They are a race to the bottom of cheaply outsourced junk-food that bleeds control from those who should hold it. Most of all there is a profound competence problem, which companies like Microsoft and Google are exploiting to the hilt.

Once upon a time being a university sysadmin was a high accolade. Few jobs were as challenging and diverse. The ability to install, configure and run a mail server, multiple web servers and a network with thousands of nodes and thousands of password logins was a badge of professional pride. It meant running a heterogeneous network of Sun, Silicon Graphics, Apple, Windows, and specialised hardware while supporting academics in their selection, installation and self-directed usage of diverse software. Professors in the maths, physics, economics and computing departments would regularly write and deploy their own software! Like a good librarian, even if the sysadmin did not understand all those subjects, she at least had to be able to talk to the academics.

Today that role is unrecognisable. Not because technology “got better”, but because we all got a lot dumber and more dependent on click-box cloud technology. We don’t own or really understand it now. We have a shrugging, negative permissions culture. The first position is to assume nothing can be done.

The disconnect between the official theoretical syllabus and daily practice is immense. Today my university could not afford to hire my own graduates for roles currently occupied by people I would fail if they were my students.

Much of what we teach is in fact obsolete because, if the standards of our own institutions are anything to go by, nobody actually needs to know how anything really works. The reality is they’d be better off doing a Microsoft Azure, Amazon AWS or Google Cloud certificate for a tenth of the price and spending the rest of their careers clicking on drop-down menus with meaningless brand names. The skill-set of educational ICT has been eviscerated.

Most egregiously, the highest levels have been staffed not by experienced administrators with an understanding of the demands and complexities of a university network, but by “industry dropouts” who bring toxic corporate buzzwords and hostile values into an institution that requires curious, tactful consultation, openness, trust and cooperation.

What that means is that it is no longer the academics who decide what research and teaching can or cannot happen. Nor is it deans and vice-chancellors. It is Microsoft and Google. Their minions, installed within our universities, are now in control.

12.29.22

GAFAM Against Higher Education: Toxic Tech

Posted in Deception, Free/Libre Software, Google, Microsoft at 12:05 am by Guest Editorial Team

Guest post by Dr. Andy Farnell

In this mini-series:

  1. GAFAM Against Higher Education: University Centralised IT Has Failed. What Now?
  2. YOU ARE HERE ☞ GAFAM Against Higher Education: Toxic Tech
  3. GAFAM Against Higher Education: Fixing the Broken Academy
  4. GAFAM Against Higher Education: Digital Crash Diet

Summary: Dr. Farnell recounts what he has encountered in academia, where the infrastructure and the software are being outsourced to companies such as Microsoft

To avoid parroting the earlier article I’ll more quickly summarise these areas and then move to speak about changes.

Disempowering technologies take away legitimate control from their operator. For example, not being able to stop a Windows upgrade at a critical moment. I’ve quit counting the classroom hours lost to inefficient and dysfunctional Microsoft products that ran amok out of my control while students stared out the window.

A graduate of mine started work on an NHS IT and security support team. I made a joke about Windows running an update during a life-or-death moment in the emergency ward. He looked at me with deadly seriousness and said, “You think that doesn’t happen?”

Entitled seizure of the owner’s operational sovereignty is one aspect of disempowerment. Another is discontinuation. The sudden removal of support for a product one has placed at the centre of one’s workflow can be devastating. Google notoriously axe services in their prime of life. Weep at the headstones in the Google Graveyard.

Students’ education is suddenly “not supported”. They experience risks from software with poor long-term stability – something large corporations seem unable to deliver. By contrast, my go-to editor and production suite Emacs is almost 50 years old.

Dehumanising devices that silence discourse operate at the mundane level of issue ticketing and “no-reply emails”. But more generally, dehumanising technology ignores or minimises individual identity. It imposes uniformity, devalues interpersonal relations and empathy.

When unaccountable algorithms exclude people from services – because their behaviour is deemed “suspicious” – it is not the validity of the choices that is in question, but the very conceit of abdicating responsibility to machines in order to make an administrator’s job more “convenient”.

So-called “AI” systems in this context are undisciplined junkyard dogs whose owners are too afraid to chain them. Is it even debatable that anyone deploying algorithms ought to face personal responsibility for harms they cause, as they would for a dog that bites?

In other dehumanising ways, enforced speech or identity is as problematic as censorship or disavowal. So technology fails equally as a drop-down form forcing an approved gender pronoun, or as automatic “correction” of messages to enforce a corporate “speech code”. Sorry computer, but you do not get to “correct” what I am.

Systems of exclusion proliferate on university campuses, which are often seen as private experimental testing-grounds for “progressive” tech. Software developers can be cavalier, over-familiar and folksy in their presumptions. A growing arrogance is that everyone will choose, or be forced, to switch to their system. Yet if they are anything, universities ought to be a cradle of possibility, innovation and difference. They are supposed to be the place where the next generation of pioneers will grow and go on to overturn the status-quo.

That fragile creativity and diversity evaporates the moment anyone assumes students carry a smartphone, or a contactless payment card for the “cashless canteen”. Assumptions flatten possibility. Instruments of exclusion always begin as “opportunity”. Callous “progressives” insist that students “have a choice” while silently transforming that choice into assumptions, and then assumptions into mandates.

Destroying “inefficient” modes of interaction, like cash and library cards that have served us for centuries, gives administrators permission to lock their hungry students out of the refectory and library in the name of “convenience”. They are aided by interloping tech monopolies who can now limit access to educational services when administrators set up “single-sign-in” via Facebook, Microsoft, Google or Linked-In accounts. Allowing these private companies to become arbiters of identity and access is cultural suicide.

Systems of extraction and rent-seeking are also flourishing in education. Whether that’s Turnitin feasting on the copyrights of student essays, or Google tracking and monitoring attention via “approved” browsers, then serving targeted advertising. Students are now the product, not the customers of campus life.

The more we automate our students’ experience the more brutal it gets. Systems of coercion attached to UKVI Tier-4 attendance monitoring seem more like the fate of electronically tagged prisoners on parole. How anyone can learn in that environment of anxiety, where a plane to Rwanda awaits anyone who misses a lecture, is hard to fathom 1.

Gaslighting and discombobulation is psychological warfare in which conflicting and deliberately non-sequitur messages are used to sap morale, undermine confidence and sow feelings of fear, uncertainty and doubt.

That could hardly be a more fitting description of university administrators and ICT services whose constant mixed messages and contradictory policies disrupt teaching and learning.

We must inform all students by email – except where that violates the “bulk mail” or “appropriate use” policies. Staff should be readily available to students, except where it suits ICT to disable mail forwarding. We are to maintain inclusive and open research opportunities, except where blunt web censorship based on common keywords thwarts researchers of inconvenient subjects like terrorism, rape, hacking or even birth control.

Time-wasting technologies are those that force performative make-work and bullshitting activities. They offer what Richard Stallman calls “digital disservices”. For example: copying data, row by row, from one spreadsheet to another might be justified in an air-gapped top-secret facility. It is unacceptable where administrators, following a brain-dead “policy”, have stupidly disabled copying via some dreadful Microsoft feelgood security “feature”. This is the kind of poorly thought-out “fine grained” drudge-making security that Microsoft systems seem to celebrate, and the kind of feature that power-hungry, controlling bosses get moist over. It is anti-work. This lack of trust is grossly insulting to workers toiling on mundane admin work with such low security stakes.
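The row-by-row copying described above is exactly the sort of drudgery that one line of standard tooling removes. A sketch, using hypothetical file names and plain CSV exports rather than any real institution's data:

```shell
# Stand-in for a spreadsheet export (hypothetical data and file names).
printf 'name,id,email\nAda,1,ada@example.org\n' > input.csv

# Carry two columns across to the other "spreadsheet": the transfer that a
# disabled-copy "policy" forces people to retype by hand.
cut -d, -f1,3 input.csv > output.csv

cat output.csv
# prints:
#   name,email
#   Ada,ada@example.org
```

That is the entire job; any system that makes this take an afternoon of manual retyping is manufacturing work, not securing anything.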

Technologies that distract are pernicious in education. Nothing saps learning more than tussling for the attention of students and staff as they try to work. Yet my university-approved Microsoft Office365 running on a Google Chrome browser seems designed to arrest my focus and frustrate all attempts to concentrate. Advertisements and corporate spam have no place in my teaching workflow, so I refuse to use these tools which are unfit for purpose.

Finally, only the military is guilty of more gratuitous waste than academia. To see garbage skips filled to the brim with “obsolete” computers because they will not run Windows 11 is heartbreaking. Crippled at the BIOS level, they are designated as e-waste due to the inability of IT staff to use a simple screwdriver to remove hard-drives containing potential PII. Meanwhile students beg me for a Raspberry Pi because they cannot afford the extra hardware needed for their studies.

Footnotes:

1. Except for those overseas students who might appreciate a free flight back home for Christmas.

12.28.22

GAFAM Against Higher Education: University Centralised IT Has Failed. What Now?

Posted in Free/Libre Software, GNU/Linux, Google, Microsoft, Servers at 12:03 am by Guest Editorial Team

Guest post by Dr. Andy Farnell

In this mini-series:

  1. YOU ARE HERE ☞ GAFAM Against Higher Education: University Centralised IT Has Failed. What Now?
  2. GAFAM Against Higher Education: Toxic Tech
  3. GAFAM Against Higher Education: Fixing the Broken Academy
  4. GAFAM Against Higher Education: Digital Crash Diet

Summary: Today we commence a 4-part series about what has happened to British universities (probably not only universities, and not just in Britain either), based on the account of an insider, a visiting professor at several European universities

An article I wrote for the Times HE on “Eliminating harmful digital technologies in education” generated some attention and comments. I’ve been asked “What can we do?” That is to say, I failed to properly address the implied call to arms and merely enumerated the technological problems in education. Smart people want to hear about solutions, not problems.

First I wanted to move the conversation beyond the self-evident and visible, like invasive CCTV cameras, card access systems (and soon phone tracking, fingerprint and face scanners) that give our places of learning all the warmth of a Category-A high-security facility for child sex offenders.

This isn’t necessary. Visiting London I sometimes wander into the Gower Street quad to enjoy a coffee with my Alma Mater. In University College London it’s possible, and pleasant, to wander the halls to reminisce. There are not too many cameras to spoil the architecture, and security is still handled by the famous maroon-jacketed Beadles. UCL seems to blend seamlessly into the leafy squares of Bloomsbury, accommodating many buildings with open doors and welcoming receptionists. By contrast, other universities have degenerated into carceral gulags, accessible only by appointment, through turnstiles and scanners, and patrolled by black-clad goonies.

Certainly we must keep reminding the world that a digital dystopia is inappropriate in the context of teaching and learning. Offensive technology must not be allowed to fade into the background, to become normalised, quiescent and acceptable.

But these are only the visible manifestations of a deeper malaise. Drifting from a public good into the waters of brutal corporate values, the academy – lured by the siren song of a security industry – has marked its own students as pirates and brigands.

One backwater university began blocking students from forwarding mail from their institutional Microsoft accounts to their personal inboxes, on the grounds that they might “exfiltrate teaching materials”. In a world where MIT and Stanford put their best courses online for free it beggars belief what goes through the minds of ICT staff so cloistered and divorced from core functions.

Of course, in the name of fairness, the same implied criminality and untrustworthiness is extended to staff. Anyone trying to run labs or prepare teaching materials for microelectronics, IoT, web technology, or cybersecurity must face stiff resistance to any non-Microsoft activity that cannot be brought under the boot of centralised surveillance.

I wonder who, other than digital rights researchers like myself, is watching this death spiral in the academy. College unions like the UCU and NUS (student union) seem to have little or no awareness of the digital rights abuses perpetrated against staff and students in our universities under the banners of “security” and “efficiency”.

Offensive technology serves the chancellors, trustees, landlords, governments, industries, advertisers, sponsors, technology corporations, suppliers and publishers. It serves administrators who believe technology will deliver fast, efficient, uniform, accountable, secure, and most of all cheap education. It serves everyone but the key stakeholders in education: lecturers and students. The cost of draconian over-monitoring is that it corrodes our ability to teach and learn as fully human beings.

But again, monitoring and obstruction are only two aspects of the technological menace facing teaching. I was asked to look at all forms of harmful technology, and these cannot be located in specific systems or policies. Instead I enumerated broad categories of harm, namely technologies that:

  • disenfranchise and disempower
  • dehumanise
  • discriminate and exclude
  • extract or seek rent
  • coerce and bully
  • mislead or manipulate

On reflection I would add a few less general harms to the original Times HE list, being technologies that:

  • distract
  • waste time
  • waste resources
  • gaslight and disturb
