Brabant AI Show seeks a middle ground between hype and fear
BrabantKennis delivered an evening that was at once a talk show, a future exploration, satire, and a crash course in digital morality.
Published on April 16, 2026
Bart, co-founder of Media52 and Professor of Journalism, oversees IO+, events, and Laio. A journalist at heart, he keeps writing as many stories as possible.
With sharp conversations, playful doomsday scenarios, and practical lessons in “AItiquette,” AI was presented on Wednesday evening not as a miracle cure or an impending downfall, but as a societal choice on which Brabant must already form an opinion.
“Major technological changes are always accompanied by imagination. Dream and doomsday images are used interchangeably.” With those words, Eef Berends immediately sets the tone in a well-filled Hofnar for The Big Brabant AI Show. Not as a tech demo, not as a policy evening, but as a theatrical search for a question that turns out to be much bigger than technology alone: what do we actually want with AI?
BrabantKennis had chosen a format that was smart enough to provoke and light enough to take a broad audience along. Presenter Eef Berends led the evening with visible enjoyment, director Jos van den Broek acted as sidekick, and Sophie Loonen emerged as the “one and only Brabant AItiquette expert.” In between, guest speakers Bart Wernaart, lecturer in Moral Design Strategy, and Hans de Penning, founder of Neople, navigated the stage. The result was an evening that constantly blended journalistic conversations, challenging statements, ethical assumptions, documentary fragments, and humor.
A green score
The audience is actively involved from the very first minutes. What do you use AI for? Vacation plans? Music? And for what not? Personal information, comes the immediate response from the audience. AI suddenly becomes not something that happens somewhere in Silicon Valley, but something that is already penetrating Brabant lives, choices, and institutions.
We see a fictional future fragment about health, set in 2049, on the big screen. In the images by Jolien van de Griendt and Merlijn Passier, a certain Nadia has to go running to keep her “health score” up. Her running companion collapses, but has the misfortune that ambulances are dispatched only for people with a green score. It is amusing, uncomfortable, and just credible enough to be unsettling. A cautious laugh of recognition goes through the room.
Van den Broek immediately gives words to that ambiguity. Such a world does not yet exist, he says, “but we can quite imagine that some elements from this video will fit into our near future, that we are moving in that direction.” That is precisely where the strength of the evening lies: in making the slippery slope visible.
Bart Wernaart interprets this sharply. “Some elements, if you peel it back a bit, we are actually already right in the middle of,” he says. But: “The question is whether you really want to go there, in the end.” According to Wernaart, technology changes not only what we can do, but also how we interact with each other. “You see social interaction changing, you also see the way we relate to knowledge changing.”
His most striking observation concerns the way generative AI likes to affirm us. Referring to recent research, he states that systems like ChatGPT are often inclined to please the user. “That you a) always feel flattered, that you feel comfortable. But b) that you also constantly get a kind of self-confirmation.” Precisely there, he suggests, AI shifts from a tool to a moral mirror that is too friendly to correct.
AItiquette
Yet, the evening never descends into cultural pessimism. That is partly thanks to Sophie Loonen, who, with a delightfully dry etiquette segment, added a comic and surprisingly useful layer to the evening. “AI may be smart, but not wise,” she tells the audience. In short lessons, she summarizes the discussion into rules that are useful for both citizens and policymakers: “Keep thinking for yourself,” “Look beyond the average person,” “Make deviation possible,” “Value more than efficiency,” “Know who makes it,” and “Think in decades.”
Again and again, the same tension returns: AI promises efficiency, speed, and scale, but which values disappear along the way? Loonen: “Efficiency is a value, not a given.”
The second future scenario on the big screen, about a municipal council in which “Democrat AI” has largely taken over decision-making, also lands well. The image of absent council members whose proxies are automatically transferred to the system is absurd and yet not entirely unthinkable. “Based on this vote, the proposal can be adopted. As agreed in article 34 of the AI Safety Act, you now have 60 seconds to invoke a human veto.”
Bias
Van den Broek points to the risk of “automation bias”: the human tendency to trust systems, especially when they appear to process large amounts of information faster than we do. Wernaart adds that a human veto button at the end is of little value if you do not understand what happens inside the black box. “You want to understand what happens inside that thing. On what basis does it make its decisions?”
Hans de Penning then brings in the perspective of practice. At Neople, he builds “AI employees” that help companies work more efficiently. His story is therefore less philosophical. What is already happening in companies today, he says, is closer to the fiction we see than many people think. “That a report is presented by an AI model and that we as humans then have to make a decision about it together is already very close to how things work today.”
Yes, De Penning is “extremely optimistic” about the technology. But at the same time, he identifies a new kind of workload: developers who are not burdened less, but differently. “Our guys are actually tired by eleven o’clock,” he says, because they constantly have to check the output of a “small army” of digital assistants, under the constant threat of an “AI burnout.”
Fiction and data
This once again makes the central tension of the evening clear. AI does not simply take over work; it redistributes attention, responsibility, and yes, fatigue. And so, in the end, this show is less about machines than about people. About convenience and group pressure. About public values. About the question of who writes the rules of the game. And based on which AItiquette.
BrabantKennis wisely chose not to seal those questions with answers. There was fiction, there was data, there were ethical rules of thumb, there was even a story about “the woman in the basement,” a literary metaphor for the omnipresence of data-driven systems. But the evening left enough open not to end the conversation, but to start it.
AI was not sold here as an inevitable fate, but as something that citizens, administrators, and companies must actively relate to. Not blindly enthusiastic, not reflexively rejecting.
And that is exactly why this evening worked. Because it did not turn AI into an abstract debate, but into a Brabant-based issue of the here and now.
