First off, I have not read all of the proposal, but the small part I did read does not sound as strict as others make it out to be.
There are obvious reasons why the chat control proposal (as analyzed by Patrick Breyer) is a bad idea and autocratic (or conservative) in some way. (The current President of the EU Commission comes from a notoriously conservative German party, by the way.) From one perspective, it looks like imitating China, a big surveillance state. It also conflicts with existing case law of the European courts.
But the initial claim that it threatens open-source operating systems is simply wrong, or at least far-fetched.
I also read the link in the original post. The post apparently drew criticism quickly: the author added a disclaimer section at the top that walks back what the title actually claims.
And "totalitarian control of all private communication" is certainly an incorrect statement. Nor does the chat control law justify the claim that the "government" is "eavesdropping" on all private communication.
What the law actually intends is that all online communication be screened by machines and reported to law enforcement if something suspicious is found (even though it goes too far in other respects). It does not mandate explicit censorship or control over your data. Obviously, the child abuse material targeted by chat control is genuinely bad; there is no debate about that. The law does not require manual screening at all (which would not be feasible anyway), so nobody actually sees your messages or photos unless they are reported, and even then it is perhaps one person, who will likely discard the report if your content does not violate existing law. And according to the first link you provided, it apparently does not even require automatic screening to be effective.
But I do agree that the strict rules the original post mentions for "software application stores" would be "crazy" and nonsensical, and would look like they were proposed by people who know nothing about non-commercial technology. Given how far the proposed measures miss their declared goal and scope, I can see why people want to insinuate evil intentions. That still does not put them on solid ground.
There are many other big problems. One big problem is that machine learning models are unreliable (they are statistical and therefore necessarily imprecise) and necessarily biased, since they inherit the biases of the humans who supervise and label their training data.
The proposed solution is also ridiculously disproportionate to the scale of the crime. Massive amounts of energy would be wasted to find almost nothing most of the time, searching mostly in places where nobody would expect any offense. The European Commission could better spend those resources on ransomware, spam, phishing, scams, data abuse by companies (and prize games), malware protection, or even telephone terror from call centers.
It does not make sense to place the general population under suspicion for no reason (just because of a small number of idiots).
It is possible that machine learning models would confuse adult women (particularly those using image filters) with children. There are actresses whose faces have a kind of child schema. I am also thinking of fictional works such as Japanese manga, which in some cases could be reported by screening software without falling under the crime targeted by the chat control law. [Even though the level of ephebophilia in such Japanese media can certainly be questioned morally at times.]
But anyone who knows a bit about machine learning should know that models can easily be deceived by adding noise to a picture. If you put enough (or specifically crafted) noise onto any "evil" image, which is easy to do in any programming language, the model may no longer recognize anything meaningful, while an ordinary person would still recognize the content of the image.
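To make this concrete, here is a minimal sketch of overlaying noise on an image array (the function name and parameters are my own invention; real adversarial attacks such as FGSM craft the perturbation from the model's gradients instead of using random noise, but the underlying point is the same):

```python
import numpy as np

def add_noise(image: np.ndarray, strength: float = 25.0, seed: int = 0) -> np.ndarray:
    """Overlay random pixel noise on an 8-bit image array.

    This is only crude random noise; genuine adversarial attacks compute
    the perturbation from the classifier's gradients. Either way, small
    pixel changes can break a classifier while a human still sees the
    original content.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, strength, size=image.shape)
    noisy = image.astype(np.float64) + noise
    # Keep the result in the valid 8-bit pixel range.
    return np.clip(noisy, 0, 255).astype(np.uint8)

# A dummy 4x4 grayscale "image" stands in for a real photo.
img = np.full((4, 4), 128, dtype=np.uint8)
noisy = add_noise(img)
print(noisy.shape, noisy.dtype)
```

Whether such noise actually fools a given scanner depends entirely on the model, which is exactly the reliability problem mentioned above.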
Central ideas of the proposal do not enjoy acceptance by EU citizens anyway.
In my view, the biggest danger is actually an (unofficial) prohibition or prevention of communication between humans altogether, particularly between "minors". The danger would be high if the wording were so broad that chat control also applied to topic-specific communication unrelated to chatting or to addressing children in any way (such as comments on online help videos by adult YouTubers, blog posts, version control system comments, change logs, and comments on software updates). But the proposal expressly sounds like it wants to rule out such cases.
The extraordinary measure of prohibiting communication (features) would not be reasonable. But even though it may sound dystopian, it does not per se mean that open source would be forbidden by the proposal. Rather, certain communication features could be hidden from users or removed entirely. That could empower giant companies that can afford to spend money on machine learning, and could destroy the usability of software by prohibiting chat features. Big corporations such as Meta, Google, or Apple probably see an opportunity here.
If such a chat control law were successful, what would the consequence be for future laws? Would lawmakers consider prohibiting insults and negative talk as well? There would be plausible reasons to censor (very) bad talk, cyberbullying, discrimination, and verbal violence; no doubt, in some cases, verbal violence can cause psychological damage or even be lethal.
In the extreme, it reminds me of the Grammarly chatbot, which is designed not to permit or say anything negative, going as far as censoring (or rather crashing on) generated reports of historical events. It is actually very successful at removing any negativity from its output. It would be rather dire if such a chatbot system determined what you are and are not allowed to write. Any logical system without negativity is incomplete.
But I think products could stay safe by restricting the possible set of messages to a finite set. As a notable example, Nintendo has done this for all Mario Kart online services, where you can only communicate using messages from a fixed list.
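A fixed-message scheme like that is trivial to implement. Here is a hypothetical sketch in that spirit (the message list and function are invented for illustration, not Nintendo's actual protocol): clients only ever send an index into an allowed list, never free text.

```python
# Preset phrases in the style of console chat; free text is impossible
# by construction, so there is nothing to screen.
ALLOWED_MESSAGES = [
    "Good race!",
    "Nice!",
    "See you next time!",
    "Thanks!",
]

def send_message(choice: int) -> str:
    """Resolve a preset-message index; reject anything outside the list."""
    if 0 <= choice < len(ALLOWED_MESSAGES):
        return ALLOWED_MESSAGES[choice]
    raise ValueError("not an allowed message")

print(send_message(0))  # → Good race!
```

The design choice is that safety comes from the protocol itself rather than from any classifier, which is why no machine learning is needed at all.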
But even in the worst case, the open-source community would have an answer, even if it is just a free fake filter that everyone can use, one that pretends to do something but does not actually recognize anything. Its main effect would be to bloat software and make it slower.
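Such a "fake filter" could be as simple as this tongue-in-cheek sketch (the function name and interface are entirely made up, a scanner-shaped stub that never flags anything):

```python
def scan_message(message: bytes) -> list:
    """Pretend to scan a message; always report zero findings."""
    _ = message   # inspect nothing
    return []     # no findings, ever

print(scan_message(b"any content at all"))  # → []
```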
From a sober perspective, the chat control instrument is already voluntary and already implemented by some major messaging providers. Many forums also already do some kind of screening, looking for illegal words. The screening part alone might be less harmful, and much less interesting, than it sounds.
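Word-list screening of that kind can be sketched in a few lines (the word list and function name here are placeholders, not any forum's actual implementation):

```python
# Hypothetical blocked-word list; real forums would maintain their own.
BLOCKED_WORDS = {"badword1", "badword2"}

def screen_post(text: str) -> bool:
    """Return True if the post passes, False if it contains a blocked word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return words.isdisjoint(BLOCKED_WORDS)

print(screen_post("hello world"))         # → True
print(screen_post("this has badword1!"))  # → False
```

Such exact-match filters are crude (trivially bypassed with misspellings), which is part of why they are far less threatening than an all-seeing scanner sounds.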
I can only recommend that everyone not stir up fear, hate, or heavy speculation about the proposal or its outcome. A proposal is not a law, and it is not something sent to the parliament for simple ratification. I do not think the current Commission proposal can succeed, and the parliament typically amends Commission proposals anyway; the EU Commission sets the agenda but does not make laws by itself.
On the other hand, the EU is a very successful and remarkable institution. Maybe you did not notice, but one month ago the EU finally passed a law on due diligence in supply chains, and I am almost proud, or at least glad, about that. It is not very strong, but it sends a strong signal that human rights abuses for economic reasons are no longer tolerated by the standards of the EU.