<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Practicing Trustworthy AI by Bran Knowles]]></title><description><![CDATA[Reflections on creating technologies for a world we can trust.]]></description><link>https://trustbranknowles.substack.com</link><image><url>https://substackcdn.com/image/fetch/$s_!ly6_!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcdb9e370-e9b4-4d15-8bf6-ddaf57bfbf8c_1108x1108.png</url><title>Practicing Trustworthy AI by Bran Knowles</title><link>https://trustbranknowles.substack.com</link></image><generator>Substack</generator><lastBuildDate>Thu, 09 Apr 2026 17:15:35 GMT</lastBuildDate><atom:link href="https://trustbranknowles.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Bran Knowles]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[trustbranknowles@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[trustbranknowles@substack.com]]></itunes:email><itunes:name><![CDATA[Bran Knowles]]></itunes:name></itunes:owner><itunes:author><![CDATA[Bran Knowles]]></itunes:author><googleplay:owner><![CDATA[trustbranknowles@substack.com]]></googleplay:owner><googleplay:email><![CDATA[trustbranknowles@substack.com]]></googleplay:email><googleplay:author><![CDATA[Bran Knowles]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA["Trustworthy AI" as Trauma Creator [Introduction]]]></title><description><![CDATA[How AI regulation stabilises narrative while destabilising us]]></description><link>https://trustbranknowles.substack.com/p/trustworthy-ai-as-trauma-creator</link><guid 
isPermaLink="false">https://trustbranknowles.substack.com/p/trustworthy-ai-as-trauma-creator</guid><pubDate>Fri, 20 Feb 2026 14:33:22 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1556097549-e2371517ae20?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzNnx8cGFpbnxlbnwwfHx8fDE3NzE1MDkwOTB8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p><em>Readers will notice I took an extended break from writing. This was, in large part, due to a bereavement. I do apologise for the interruption, and I&#8217;m grateful to be inspired again. </em></p></blockquote><p>Last I wrote, I was exploring how AI undermines self-trust in ways that increase the difficulty of making autonomous decisions about whether and how to involve ourselves with AI. <a href="https://substack.com/@trustbranknowles/p-177004920">Drawing on McLeod&#8217;s formulation of self-trust</a>, I focused on erosion of people&#8217;s confidence in their own competence and moral integrity. I want to extend this further here, examining the ways that untrustworthy AI erodes a more fundamental self-trust in one&#8217;s own perceptions and instincts&#8212;when it creates <em>distrust of distrust</em>. This, I will argue, is a signature of <em>trauma</em>. It is a learned response to an environment characterised by asymmetrical learning: one party emits a signal that they have experienced harm (expresses distrust); the other party responds to the signal in ways that increase harm to the person emitting it, training the person that their instinct is harmful. Distrust of threat perception and affective response is locally adaptive in such environments.</p><p>This essay will bring together two lines of thinking from our wonderful Substack universe. 
The first is <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Vera Hart MD PhD&quot;,&quot;id&quot;:331280082,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:null,&quot;uuid&quot;:&quot;6a2b1f0d-9fd6-485e-af36-c9568d23684e&quot;}" data-component-name="MentionToDOM"></span>&#8217;s beautifully incisive articulation of the nervous system architecture of <a href="https://substack.com/home/post/p-178123013">Trauma Creators vs. Trauma Survivors</a>. I will translate her analysis to the world of AI&#8212;a tool conceived in existential anxiety, a means of taming unpredictability and overcoming interdependence, ultimately solidifying a metaphorical Default Mode Network organised around maximisation of control and omnipotence. Unsurprisingly, those developing and deploying AI for these ends fortify dominance through control work cloaked as compliance, replicating the neural pathways of &#8220;narcissistic&#8221; individuals in very specific ways which I will describe. </p><p>To be clear, in this analogy I am not suggesting that the technology itself is narcissistic, as if it has a personality disorder. AI does not fear powerlessness and construct elaborate defence mechanisms. People do this; and complicating the story (it&#8217;s never straightforward when speaking about &#8220;AI&#8221;), AI can be both the defence mechanism and the thing being defended. For our purposes, I&#8217;m suggesting that we treat the organisational unit as &#8220;narcissistic,&#8221; whether that is the unit developing it or the unit deploying it. 
And I won&#8217;t go so far as to suggest that all such units are &#8220;narcissistic.&#8221; What I will say, however, is that this &#8220;narcissism&#8221; is structurally enabled by a form of AI governance that <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Sabino Marquez&quot;,&quot;id&quot;:324108415,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/85fee408-b712-44ea-a283-02bf263b5a3e_1024x1024.png&quot;,&quot;uuid&quot;:&quot;9ec8249b-9bd7-4a75-9f66-703f875414e1&quot;}" data-component-name="MentionToDOM"></span> terms &#8220;<a href="https://substack.com/home/post/p-187437660">Synthetic Value Safety</a>&#8221; (SVS). SVS is analogous to the narcissist&#8217;s preoccupation with maintaining their public image: what matters is narrative coherence around &#8220;trustworthy&#8221; and &#8220;responsible&#8221; AI rather than meaningfully addressing harms indexed by distrust. The AI regulatory apparatus is recruited as a witness layer for the narrative, invalidating any perceived harms not surfaced through an inspection framework. </p><p><em>This </em>is the trauma. When distrust does not lead to learning and change within the AI unit, but is instead neutralised with documented evidence of trustworthiness, the distruster learns that the AI unit is not responsive to their pain because their perception diverges from institutionalised &#8220;reality&#8221;. And yet, suppressed pain does not disappear. The nervous system continues to respond to the absence of moral resonance in these AI units: unmetabolised distrust transforms into anguish. Highly affectively charged AI distrust is not pathology (neither over-emotionality nor illiteracy); it is the predictable failure signature of AI governance with a feedback loop that gates distrust, preventing error ingestion and genuine accountability. 
</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/subscribe?"><span>Subscribe now</span></a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1556097549-e2371517ae20?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzNnx8cGFpbnxlbnwwfHx8fDE3NzE1MDkwOTB8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1556097549-e2371517ae20?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzNnx8cGFpbnxlbnwwfHx8fDE3NzE1MDkwOTB8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1556097549-e2371517ae20?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzNnx8cGFpbnxlbnwwfHx8fDE3NzE1MDkwOTB8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1556097549-e2371517ae20?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzNnx8cGFpbnxlbnwwfHx8fDE3NzE1MDkwOTB8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1556097549-e2371517ae20?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzNnx8cGFpbnxlbnwwfHx8fDE3NzE1MDkwOTB8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1556097549-e2371517ae20?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzNnx8cGFpbnxlbnwwfHx8fDE3NzE1MDkwOTB8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" 
width="4608" height="3456" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1556097549-e2371517ae20?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzNnx8cGFpbnxlbnwwfHx8fDE3NzE1MDkwOTB8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:3456,&quot;width&quot;:4608,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;grayscale photo of person placing hand on face&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="grayscale photo of person placing hand on face" title="grayscale photo of person placing hand on face" srcset="https://images.unsplash.com/photo-1556097549-e2371517ae20?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzNnx8cGFpbnxlbnwwfHx8fDE3NzE1MDkwOTB8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1556097549-e2371517ae20?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzNnx8cGFpbnxlbnwwfHx8fDE3NzE1MDkwOTB8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1556097549-e2371517ae20?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzNnx8cGFpbnxlbnwwfHx8fDE3NzE1MDkwOTB8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1556097549-e2371517ae20?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzNnx8cGFpbnxlbnwwfHx8fDE3NzE1MDkwOTB8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 
pc-reset"></div></div></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@a_d_s_w">Adrian Swancar</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><h3>Coming soon&#8230;</h3><p>I am posting the introduction for now to whet your appetite while I work on the rest of the essay. 
Please stay tuned!</p><p>Any thoughts on this argument so far are very welcome.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/trustworthy-ai-as-trauma-creator/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/p/trustworthy-ai-as-trauma-creator/comments"><span>Leave a comment</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Different "fight languages"]]></title><description><![CDATA[Avoiding triggering even more distrust]]></description><link>https://trustbranknowles.substack.com/p/different-fight-languages</link><guid isPermaLink="false">https://trustbranknowles.substack.com/p/different-fight-languages</guid><dc:creator><![CDATA[Bran Knowles]]></dc:creator><pubDate>Mon, 01 Dec 2025 10:05:28 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1598316560463-0083295ca902?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8ZmlnaHR8ZW58MHx8fHwxNzY0MDI5NjI5fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This post comes to you out of sequence. In writing about distrust of AI as intertwined with self-distrust, I paused for lunch and watched a video that YouTube was recommending to me.</p><p>I have recently gotten rather obsessed with R&#216;RY, an alt-punk singer/songwriter. (And now you know that my Substack writing is fuelled by alt-punk music, which may provide some insight on tone.) In her other life, where she is known as Rox, she produces light-hearted but informative YouTube videos with her partner, Rich, on ADHD. 
I was watching because, as a fan, I was interested in her story.</p><p><a href="https://www.youtube.com/watch?v=8sNOgKtyMbc&amp;t=757s">This particular episode</a> was about how they navigate arguments as a couple. Rich, they say at the start, identifies with being &#8220;on the spectrum&#8221;, though he has not pursued an autism diagnosis. Rox (officially diagnosed with ADHD) and Rich have very different ways of reacting to conflict. Rox wants to talk and talk and talk, to explain how she feels. This can feel very intense to Rich, who prefers to deal in facts rather than emotions, and who feels safer retreating until he can work out what to do. To Rox, his retreating feels like rejection. Each needs different things, and not understanding these differences intensified the conflict until they came to understand each other.</p><p>Rox explains: &#8220;&#8230;now I understand that you have to understand the logic of something before saying sorry. In the early days, that would feel invalidating. &#8216;Well what do you mean?&#8217;, &#8216;What was it I said?&#8217;, &#8216;Why did it land like that?&#8217; I understand that now. So I think when you understand your partner&#8217;s fight language, you can really make the right decisions in the moment. And when you don&#8217;t understand your partner&#8217;s fight language, they are going to trigger the hell out of you.&#8221; </p><p>It occurred to me that this dynamic could be a helpful way of explaining what causes distrust of AI to spiral. This is simply the escalation of conflict.</p><p>Most of the time, distrust of AI is highly affective; the only fitting language is that of emotions. It&#8217;s not so much about the specifics, it&#8217;s everything, swirling around all at once, as I proposed in the <a href="https://open.substack.com/pub/trustbranknowles/p/distrust-of-ai-as-self-distrust?r=5z1pkv&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">last post</a>. 
On the other side, we have the AI practitioner, who is working as part of a team trying to reason their way to a solution: what did the AI do wrong, what tweak can be made to avoid this distrust in the future? A bag of jumbled up emotions isn&#8217;t actionable, it&#8217;s stressful. They want specifics. They want distrust of AI to be explained in terms of things like bias, which is mathematical, factual.</p><p>Both sides feel triggered. The person who expressed distrust feels invalidated, oversensitive (anger turning inward), rejected (anger turning outward). The practitioner starts to shut down from sensory overload, unable to deal with all of the emotions coming their way, and perhaps feeling unjustly accused (the system is trustworthy according to what we are able to measure!). So communication breaks down even further. There can be no productive conversation about distrust when each side sees the other as either too hot (too emotional) or too cold (emotionless).</p><p>Maybe an important shift in dealing with distrust of AI will come simply from understanding <em>different fight languages</em> and working out what each party needs when experiencing conflict. </p><p>As I&#8217;ve explored <a href="https://open.substack.com/pub/trustbranknowles/p/4-kinds-of-silencing?r=5z1pkv&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">previously</a>, the person expressing distrust needs to see their feelings being given proper uptake. But Rich was listening and trying to understand Rox; he was just coming at it from a logical perspective that felt like a denial of the relevance of how Rox was feeling.</p><p>My point isn&#8217;t that both sides are hurting and both need to be nicer to the other. (In this analogy, I&#8217;m Team Rox all the way.) What I am saying is that sometimes the way practitioners react to distrust can feel like dismissal, when perhaps it&#8217;s not that. 
Maybe what looks like a challenge is actually the practitioner trying to unjumble the story, to tease it out in an organised way. Maybe what looks like silence is simply processing time, fact-gathering, working-out, strategising. But until the person comes to understand something of the practitioner&#8217;s fight language, these actions can sow further distrust.</p><p>I&#8217;m taking for granted that the practitioner is acting in good faith to address distrust. (This isn&#8217;t always true, and for those other instances I have different advice.) But my point is that fights can still escalate unless greater attention is given to the choreography of handling conflict. In the end, no matter how the practitioner does the work of addressing distrust, they have to make the distrusting person feel understood, respected, like their emotions are important; but they also have to make clear to the distrusting person what they themselves need and what their own patterns are, so that their actions become legible as repair to the other person.</p><p>In practice, this implies a much more dialogic approach to trust-building. As much as we talk about trust, we tend to neglect the real work of being <em>in relationship</em>. When a customer or trust buyer comes with a concern, you can&#8217;t earn their trust until you learn to speak their fight language (what they need to start to feel safe again); and they can&#8217;t understand your reactions until you clarify your fight language, the things you are doing to create that safety, to fix the situation. </p><p>This is basic stuff. But it&#8217;s amazing how easy it is to forget how to be in trusting relation when it comes to customer-vendor relationships. Sometimes it&#8217;s worth saying the obvious thing, going back to basics.</p><p>To say it&#8217;s basic is not to say it&#8217;s easy. This relational choreography takes work to master. For one, every AI practitioner will have their own personal fight language, their own reaction to conflict. 
Some might have to work at depersonalising distrust&#8212;to see the conflict as productive friction, not a personal attack.</p><p>The bigger problem I see, however, is trying to scale up this relationship work. Two things that work against healthy choreography at larger scales are: 1) reactions are often untethered from any specific product, so distrust is likely to become even more emotionally jumbled; and 2) data management entails flattening, decontextualising, abstracting, creating distance from the complainant/originating complaint. So when, for example, the public expresses distrust of the growing use of AI across government departments, what would a trust-promoting response entail? </p><p>This is an open question, and I&#8217;d be interested to hear what others think. But I have to imagine it begins with a serious and conspicuous effort to understand what the public needs to feel safe; it involves a transparent inventorying of the work that is being done to ensure trustworthiness and how it maps to real people&#8217;s specific concerns; and it carries on in an iterative way that demonstrates commitment to this relationship.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/different-fight-languages/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/p/different-fight-languages/comments"><span>Leave a comment</span></a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Practicing Trustworthy AI by Bran Knowles! 
Subscribe for free to receive new posts and support my work.</p></div></div></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1598316560463-0083295ca902?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8ZmlnaHR8ZW58MHx8fHwxNzY0MDI5NjI5fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1598316560463-0083295ca902?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8ZmlnaHR8ZW58MHx8fHwxNzY0MDI5NjI5fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1598316560463-0083295ca902?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8ZmlnaHR8ZW58MHx8fHwxNzY0MDI5NjI5fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1598316560463-0083295ca902?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8ZmlnaHR8ZW58MHx8fHwxNzY0MDI5NjI5fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1598316560463-0083295ca902?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8ZmlnaHR8ZW58MHx8fHwxNzY0MDI5NjI5fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img 
src="https://images.unsplash.com/photo-1598316560463-0083295ca902?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8ZmlnaHR8ZW58MHx8fHwxNzY0MDI5NjI5fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="4256" height="2832" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1598316560463-0083295ca902?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8ZmlnaHR8ZW58MHx8fHwxNzY0MDI5NjI5fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2832,&quot;width&quot;:4256,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;girl in blue sleeveless dress&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="girl in blue sleeveless dress" title="girl in blue sleeveless dress" srcset="https://images.unsplash.com/photo-1598316560463-0083295ca902?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8ZmlnaHR8ZW58MHx8fHwxNzY0MDI5NjI5fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1598316560463-0083295ca902?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8ZmlnaHR8ZW58MHx8fHwxNzY0MDI5NjI5fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1598316560463-0083295ca902?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8ZmlnaHR8ZW58MHx8fHwxNzY0MDI5NjI5fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, 
https://images.unsplash.com/photo-1598316560463-0083295ca902?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8ZmlnaHR8ZW58MHx8fHwxNzY0MDI5NjI5fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@obiefernandez">Obie Fernandez</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p></p>]]></content:encoded></item><item><title><![CDATA[Distrust of AI as self-distrust]]></title><description><![CDATA[Part 1: Why we need self-trust for autonomous decision 
making]]></description><link>https://trustbranknowles.substack.com/p/distrust-of-ai-as-self-distrust</link><guid isPermaLink="false">https://trustbranknowles.substack.com/p/distrust-of-ai-as-self-distrust</guid><dc:creator><![CDATA[Bran Knowles]]></dc:creator><pubDate>Mon, 24 Nov 2025 14:46:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MLPP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bf4e683-cfc9-4158-a822-6c52155ab191_640x476.gif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I took a little break from writing, which I think I needed. To get back into this whole Substack thing, I reread my earlier posts. In doing so, it struck me just how many different things I was claiming distrust of AI is, and how frustrating this would be to a mindset that wants to design it away. </p><p>Of course, that&#8217;s never been my aim. What I&#8217;m interested in is the complexity of human experience, in what it feels like to inhabit a sometimes failing body, to be tormented by a mind, to feel somatically, to have good days and bad days&#8230; and to encounter AI in and amongst this experience, and to have reactions to it. Human experience is not this or that; it&#8217;s this and this and this and this all at once, and tomorrow it&#8217;s that and that and that and that.</p><p>So can distrust of AI be disgust, resentment, anxiety, and disappointment as I proposed in my Doppelganger series? Absolutely. Can it also be about not feeling heard, not feeling respected, not feeling safe, as I proposed in my series on moral character? Sure. Can it be provoked by clunky interaction, flawed outputs, and bias, &#8230;<em>and</em> historical injustice, the failings of capitalism, guilt at our complicity, and our own lived experience of relational trauma? I&#8217;m sure it has to be! Because this is how we all show up to our encounters with AI. We are human. 
We are complex.</p><p>With that, in this post I will present another way of understanding our relationship with AI. Thus far, and like most people, I&#8217;ve treated trust and distrust of AI as a relationship between a person and &#8220;AI&#8221;&#8212;though hopefully I&#8217;ve made it clear that &#8220;AI&#8221; can mean the tool itself, the people behind the tool, the myth, the power dynamic (again, potentially all at once). But are we still oversimplifying? Are we denying a key part of what it means to be human? </p><p>In going about my life, I trust certain things, certain people, and distrust others; but I&#8217;m constantly assessing and reassessing my trust, doubting it, deferring it&#8230; being triggered, trying to make sense of the upending of stability, putting things back together, deciding how much I trust myself today. How I feel about anything, including AI, depends a great deal on how I feel about my feelings. So my inclination to trust or distrust AI will depend on how much I trust myself to form the right attitude.</p><p>I want to propose that we start to see (dis)trust of AI as both predicated on a prior relationship with self and implicated in the evolving of that relationship. Not so much &#8220;X trusts Y&#8221;, as &#8220;X trusts X&#8217; to trust Y&#8221;. In this formula, a person&#8217;s trust in themselves (X trusts X&#8217;) informs their trust of the AI (X trusts Y); but so too does their relationship with AI (X trusts Y) inform their trust of self (X trusts X&#8217;). Or something like that. 
All the real philosophers out there can correct the formula.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!MLPP!,w_1456,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bf4e683-cfc9-4158-a822-6c52155ab191_640x476.gif" width="640" height="476" alt="Betty White Math Calculating GIF"></figure></div><p>(Photo credit: <a href="https://www.google.com/url?sa=t&amp;source=web&amp;rct=j&amp;url=https://tenor.com/view/betty-white-math-calculating-confused-golden-girls-gif-27641696&amp;ved=2ahUKEwjV4P3r_IqRAxU-VUEAHc2oACcQjhx6BQiTARAa&amp;opi=89978449&amp;usg=AOvVaw1G2BrDxRLcTugkKtD_GUc-">here</a>.)</p><p>Let&#8217;s get into it. I turn now to Carolyn McLeod, who spent an entire book thinking through this matter. It&#8217;s called <em>Self-Trust and Reproductive Autonomy</em>, and it will ground my next few posts.</p><h3>Introducing McLeod</h3><p>McLeod is concerned that doctors have a great deal of power to affect a patient&#8217;s self-trust and, in turn, undermine their ability to make autonomous decisions about their reproductive health. She has chosen a context where vulnerability is clear, and where, therefore, trust is evidently salient. 
But what&#8217;s also important to understand about the context, as she shows, is that it inherits a number of cultural stories: </p><ul><li><p>About what it means to be a woman&#8212;both how women have long been defined in relation to their reproductive role, and how their credibility as witnesses to their own bodily experience has been systematically undermined (for more on this, I would again have to recommend the book <em>Unwell Women</em>, by Elinor Cleghorn).</p></li><li><p>About the superiority of certain ways of knowing, and of technologies as ways of measuring.</p></li><li><p>About professionalisation, which bestows on doctors a particular authority.</p></li><li><p>About consent, and which procedures are considered ethically adequate.</p></li><li><p>And about the patient, the stories they tell themselves about themselves (all bound up with categorisation and cultural stereotypes).</p></li></ul><p>Whether I trust myself depends on how I relate to these different stories, which of them I give weight to; and these stories are animated through my experiences with healthcare, becoming the basis for new stories I tell myself.</p><p>McLeod explores several situations that might arise within reproductive health, including infertility, pregnancy, and miscarriage. In any of these, the doctor will explore various tests that can be performed or treatments available and will present information about success rates and risks. (One of McLeod&#8217;s frustrations is that the emotional costs of any treatment are not given the space they deserve within such discussions.) The doctor will then require the patient to choose what they want to do&#8212;will they undergo IVF treatment, for example; will they have amniocentesis; will they have genetic testing; etc. These decisions can be fraught for women, who are not only choosing for themselves, but for their partner and their unborn child or future children. 
If they decline available medical treatments, will they later be blamed if something goes wrong?</p><p>Clearly it&#8217;s important for the women in these situations to trust their medical practitioners, and plenty has been written about this. But as McLeod notes, &#8220;in situations of vulnerability it is important not only that we can trust others, but also that we can trust ourselves to stand up for our own interests and for what we value most&#8221; (p. 1). And as she shows, medical consent is not really designed to preserve moral integrity. Patients are pressured into making decisions before they have the information they need to know whether it aligns with their interests and values. They may make decisions that in hindsight they wish they hadn&#8217;t, and thus come to doubt their own decision making capabilities. Over time, this leads to even less autonomous decision making within the reproductive healthcare journey. </p><p>There is plenty more to say about McLeod&#8217;s formulation of self-trust, which deserves some careful unpacking. But for this basic premise&#8212;that lazy approaches to consent can undermine self-trust and autonomy&#8212;it is easy to find a parallel in the realm of the digital&#8230; </p><h3>Re-reading &#8220;hopeful trust&#8221; as a symptom of lack of self-trust</h3><p>(Sometimes as an academic you want to rewrite a paper. Not because it&#8217;s not good, but because you start to see another narrative&#8230;)</p><p>In <a href="https://open.substack.com/pub/trustbranknowles/p/hopeful-trust?r=5z1pkv&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">previous posts</a> I discussed a paper I wrote called Un-paradoxing Privacy, which argued that the relationship between trust and privacy is more complicated than the literature presupposes. 
The prevailing logic is that trust reduces privacy concern and that privacy-preserving policies promote trust; but the point is that both are involved in a person&#8217;s mental calculus in deciding whether to use a service. My position was that neither trust nor privacy is a practical concern at the moment of consent, but that this does not mean trust (or privacy) is irrelevant. Instead, trust is retrospective; it helps people morally account for their behaviour in privacy paradox conditions. In other words, when people see themselves disclosing more information than they would given their privacy preferences (their values), the way they make sense of this is by thinking, &#8220;This must be because I trust the service.&#8221; This trust is characteristically &#8220;hopeful&#8221; in that it defies evidence of untrustworthiness: a person hopes that the service is more trustworthy than it is (what they know deep down) because what choice do they have? It is too uncomfortable to live with the feeling of having placed trust in an untrustworthy party and to not be able to take it back. Hopeful trust is self-preservation.</p><p>Another interpretation, however, is that this behaviour is a sort of learned helplessness. Participants described feeling like they had no choice but to &#8220;trust&#8221;&#8212;their privacy was being violated all the time by companies who have turned personal data into a saleable commodity. This is the world we live in now. This is reality. You still need to get stuff done. As with the dogs in Seligman and Maier&#8217;s seminal psychology study, you soon learn you can&#8217;t really escape the shocks and so you lie down.</p><p>But what strikes me now is that participants were mostly concerned not with the idea that there was no escape, but that they couldn&#8217;t trust themselves to do the work of escaping. (Maybe this is what happens in other cases of learned helplessness, but it&#8217;s not how it&#8217;s typically explained.) 
Part of this was a belief that they didn&#8217;t really know how to self-manage their data. But more than anticipating letting themselves down (i.e. being merely unreliable), they knew that they would <em>betray</em> themselves (i.e. being untrustworthy): they were still going to use systems even when they violated their values. In other words, they distrusted their moral integrity. </p><p>What this example shows is, just as McLeod argues, how structures that undermine autonomy erode self-trust&#8230; which further undermines autonomy. To quote McLeod:</p><ul><li><p>&#8220;People who have autonomy reflect on what they truly believe and value, and they act accordingly&#8230;&#8221; These participants were admitting they didn&#8217;t have freedom to consider what they truly believe and value; nor did it matter, since it didn&#8217;t inform their decision to use these services.</p></li><li><p>&#8230;&#8220;They are also competent and committed to engage in such reflection and to act on the results. Furthermore, they have a positive attitude toward their own competency and commitment&#8230;&#8221; The participants explained how difficult it was for them to work out how any terms they were consenting to might affect them, that they couldn&#8217;t figure out how to act to best preserve their privacy.</p></li><li><p>&#8230;&#8220;In other words, they trust themselves to make an autonomous decision&#8221; (p. 103). The way they accounted for their paradoxical behaviour was to insist that they don&#8217;t have any autonomy anyway.</p></li></ul><h3>This is hardly the whole story</h3><p>For this post, I&#8217;ve only just scratched the surface of how self-(dis)trust is implicated in (dis)trust of AI. I&#8217;ve mainly focused on the consequences of lack of self-trust, how it plays out in practical decisions to use technologies. So I have set the stage for why it matters. 
But what I&#8217;d like to do going forward is to re-examine the self-trust dimension of a number of other issues I&#8217;ve raised in my Substack posts&#8212;to begin to see, as I said at the start, how distrust is at once both about the AI (in the all-encompassing way I use it) and about the self. As with McLeod, who wanted doctors to gain more self-awareness in how they affect patient self-trust, my goal is to help sensitise AI practitioners to the fact that AI affects self-trust in ways that are harmful to the individual as well as to the human-AI relationship.</p>]]></content:encoded></item><item><title><![CDATA[Self-Trust with AI]]></title><description><![CDATA[Thoughts in progress]]></description><link>https://trustbranknowles.substack.com/p/self-trust-with-ai</link><guid isPermaLink="false">https://trustbranknowles.substack.com/p/self-trust-with-ai</guid><dc:creator><![CDATA[Bran Knowles]]></dc:creator><pubDate>Fri, 24 Oct 2025 13:36:45 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!Mi57!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9a5f8d5-8268-4e7d-bb71-dee9ab23f16b_1000x758.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It&#8217;s that time of year again: the viruses are swirling around. Last week I succumbed to one (most likely the one-that-must-not-be-named), stalling progress on my Substack writing.</p><p>This week, I&#8217;ve been working towards an expansion of discourse on trust in AI to better account for the rarely acknowledged&#8212;but I&#8217;d argue extremely important&#8212;dimension of self-trust with AI. It not only matters that we appropriately trust or distrust AI, but that we appropriately trust or distrust ourselves to have the competence and moral integrity to act with AI in ways consistent with our own values and interests.</p><p>I will develop this argument over a series of posts, drawing on the book <em>Self-Trust and Reproductive Autonomy</em>, by Carolyn McLeod. As the title suggests, her book focuses on self-trust in the particular context of reproductive healthcare, looking at how it supports patient autonomy, how it&#8217;s developed to a greater or lesser extent within individuals depending on external (social) conditions, and how reproductive healthcare practitioners can either undermine or promote self-trust.</p><p>What I plan to do is to elaborate a parallel argument for self-trust with AI: looking at what we need self-trust for in this context, exploring how different lived experiences which promote different degrees of self-trust come to affect one&#8217;s dis/trusting attitudes to AI, and drawing out important implications for AI practitioners regarding the design of AIs that better promote (appropriate) self-trust and autonomy. 
</p><p>This argument is only coherent if we understand humans and technologies not as standing in an external relation &#8220;between pre-given entities that can have an impact on each other&#8221;, but rather as &#8220;mutually constitut[ing] each other&#8221;, as argued by Kiran and Verbeek in their paper <a href="https://link.springer.com/article/10.1007/s12130-010-9123-7">&#8220;Trusting Our Selves to Technology&#8221;</a>. Using Heideggerian terms, they state that technologies &#8220;involve a <em>revealing-concealing structure</em>; they constitute the relations between human beings and their world.&#8221; From their perspective, this matters because, in being involved with technologies to a greater or lesser extent as one decides suits them and their particular interests/values, individuals are involved in &#8220;a form of self-care&#8221;. Ultimately they advocate for more reflection on how one involves oneself with technology, in &#8220;taking a stance toward this involvement&#8221;.</p><p>What they are describing is consistent with McLeod&#8217;s articulation of autonomy. While they propose that a kind of <em>confidence</em> is necessary to enable this autonomy, and use the language of &#8220;trusting our selves to technology&#8221;, they do not actually go into any depth on self-trust as a concept. This is where I will pick up, adding some richness to this premise they set out in their paper by pulling from McLeod. Whereas Kiran and Verbeek do not go so far as to suggest that technology impacts the extent to which we trust ourselves, this is entirely consistent with the view they set out in their paper, and it is where I am heading. What I will show is that AI consistently undermines our confidence in our competence and our moral integrity in ways that erode our ability to &#8220;trust our selves to technology&#8221; consistent with &#8220;self-care&#8221;.</p><p>If that sounds interesting, then stay tuned! 
</p><p>So far the creative process looks a lot like this:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Mi57!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd9a5f8d5-8268-4e7d-bb71-dee9ab23f16b_1000x758.jpeg" width="1000" height="758" alt="It&#8217;s Always Sunny in Philadelphia: Sweet Dee Has a Heart Attack (TV Episode 2008) - IMDb"></figure></div><p>Source: <a href="https://www.imdb.com/title/tt1290725/">https://www.imdb.com/title/tt1290725/</a></p><p>Meanwhile, I&#8217;d love to hear from you whether this notion of self-trust resonates with you, and how you see it mattering to trust in AI. 
Do please feel free to start a discussion in the comments.</p><h4>References</h4><p>Kiran, A.H. and Verbeek, P.P., 2010. Trusting our selves to technology. <em>Knowledge, Technology &amp; Policy</em>, <em>23</em>(3), pp.409-427.</p><p>McLeod, C., 2002. <em>Self-Trust and Reproductive Autonomy</em>. 
MIT Press.</p>]]></content:encoded></item><item><title><![CDATA[Hopeful trust: a post-mortem]]></title><description><![CDATA[Why hopeful trust eventually gives way to distrust]]></description><link>https://trustbranknowles.substack.com/p/hopeful-trust-a-post-mortem</link><guid isPermaLink="false">https://trustbranknowles.substack.com/p/hopeful-trust-a-post-mortem</guid><dc:creator><![CDATA[Bran Knowles]]></dc:creator><pubDate>Fri, 10 Oct 2025 10:08:47 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1521984692647-a41fed613ec7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0MXx8cHJvdGVzdHxlbnwwfHx8fDE3NjAwOTA4Mjd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last week I argued that there is a kind of trust that arises within human-technology relations characterised by limited agency, namely hopeful trust. While this hopeful trust sustains the relationship in some ways, it creates an internal struggle within the individual who must repress the emotional response created by the dynamic. This is very depleting, as is any dysfunctional relationship. At some point, I argued, the capacity to repress will weaken (e.g. when one can envisage a feasible way out, or perhaps when one just can&#8217;t live with the dissonance anymore) and the bond will dissolve. Customers will flee. </p><p>As if to prove my point, today is <a href="https://www.timetorefuse.com/">Time To Refuse</a>, a day of app deletion for those who are fed up with &#8220;being controlled&#8221;, are done accepting &#8220;anything less than our full humanity&#8221;, are choosing &#8220;a life we elect, not one imposed on us&#8221;, and are focusing on &#8220;build[ing] back communities&#8221;. That this has been designed as a concerted effort says something about how difficult it can be for individuals to make this choice alone. </p><p>And notice the language here! These are interpersonal grievances. 
The felt harms of these systems are the wounds of interpersonal violence. Scholars will insist that AI is a tool, so interpersonal models of trust don&#8217;t apply&#8212;we either trust AI to perform certain functions or we do not, the way we might trust or distrust a hammer to drive a nail into a piece of wood. But there is no separating the social context. A hammer can be used as a weapon. Do we trust the person wielding it?</p><p>Building on the seed I planted last week, I want to offer a kind of post-mortem of hopeful trust versus&#8230; solid trust? The term &#8220;trust&#8221; used to suffice, but it has been pipiked so terribly that it feels important to create a language to distinguish trust-replacement strategies from actual trust. I like &#8220;solid trust&#8221; because it makes me think of my husband: in a world that too often feels unstable and threatening, he is &#8220;my solid place,&#8221; I like to say.</p><p>Enough schmaltz. (He reads these posts, let&#8217;s not embarrass the man.)</p><p>What I&#8217;m offering today is a way of making sense of the costs of building AI systems without attending to solid trust. Yes, this equates to lost profit. In fact, worse, it contributes to the gross overvaluation of tech companies, which are projecting very flimsy trust into the future on the assumption that creating the conditions for lock-in will prevent stock price collapse. It won&#8217;t. <a href="https://www.linkedin.com/feed/update/urn:li:activity:7380245960553312256/">The bubble will burst</a>. The economic shocks will be bad enough, but worse still is what this will do to our capacity for epistemic trust, the most basic requirement of a functional society.</p><p>This will not be a polished argument. What you&#8217;re reading reflects my thought process unfolding in real time. 
Feedback is very welcome!</p><h3>A post-mortem of hopeful trust</h3><p>There appear to be three key ingredients that create hopeful, rather than solid, trust. I&#8217;ll explore how they contribute to the phenomenon and how they contribute to the loss of trust over a long enough timeframe.</p><p><em>[Below I am drawing on <a href="https://dl.acm.org/doi/abs/10.1145/3609329">my own publication</a>, referenced at the end of this post, and Sabino Marquez&#8217;s post introducing <a href="https://substack.com/@trustvalue/p-173193136">The Sovereign Machine</a>. It also enfolds the Trust Envelope Model, as discussed in my <a href="https://open.substack.com/pub/trustbranknowles/p/a-systems-view-of-trustworthy-ai?r=5z1pkv&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">previous post</a> and in <a href="https://substack.com/@trustvalue/p-173139335">Sabino&#8217;s</a>, originally.]</em></p><h5>Irreversibility</h5><p>When I confronted participants with evidence that the services they used violated their privacy preferences, they struggled to envisage a way to exit. 
They had become <em>socially entangled</em> in these services: ceasing use of them would be detrimental to their friendships, their family connections, their ability to be scheduled to a work rota, etc. They also described feeling <em>dependent</em> on these services: they no longer felt capable of accomplishing certain tasks without the use of the tool. This is lock-in. It keeps people using systems. But do they trust them?</p><p>It depends, of course, what you mean by trust. Yes, they trust them as a tool. They have what <a href="https://link.springer.com/article/10.1007/s13347-024-00837-6">Sam Baron</a> would call &#8220;a sufficiently solid inductive base&#8221; for trusting these tools will work as they expect them to work. They are demonstrably reliable, hence the lock-in!</p><p>But do they trust them in a way that promotes a sense of calm, of comfort? No. There is that niggling awareness that this feels icky. How would it feel if, every time you needed help from a work colleague, they would only help if they were allowed to read a page of your diary? You might decide this was the only way to survive in your company; but you would feel rightly awful about your colleague and about your place of employment.</p><p>Reliance in the absence of agency does not promote trust. It promotes what psychologists might call &#8220;enmeshment&#8221;. The user&#8217;s sense of self (their values, their identity) is compromised, they live in a state of chronic low-level stress, and they develop reactive emotions like anxiety, shame, and resentment. None of this feels like trust. When these accumulate over a sufficiently long period, one eventually reaches a decision point: the calculus about whether it is more painful to stay or to go starts to change, and going starts to feel like the less costly option.</p><h5>Opacity</h5><p>Another aspect of hopeful trust is that it is built on&#8230; well, hope. 
But underneath this hope is a series of untestable beliefs about how these systems (and the wider regulatory ecosystem) work. </p><p>Okay, so when a user signed up to a service, they presumably &#8220;agreed&#8221; to a privacy policy. Almost certainly they did not read this privacy policy. Absolutely without doubt they did not understand the implications of the privacy policy <em>for their privacy</em>.</p><p>What my interviews showed was that, even when made to think through the specifics of a privacy policy, people really couldn&#8217;t come up with a clear narrative about how any of the terms in the policy would affect them. This is <em>transparency as gaslighting</em>: services make it seem like they are being forthcoming but are hiding the specifics of how they are accountable to users. </p><p>Beliefs are structurally unsound building materials. Because they are not rooted in evidence, they tend to accumulate in contradictory ways, with missing walls and gaps at the joints. In psychological terms, rather than architectural ones, this creates cognitive dissonance. When beliefs are challenged, people engage defence mechanisms to protect themselves from the uncomfortable feelings this provokes. They may justify the contradiction by creating ever more fanciful narratives about why they are, in fact, safe (despite appearances). But this process is adding false belief upon false belief. And beliefs can be upended when a person encounters solid proof to the contrary. After all, the best way to mitigate cognitive dissonance is to change beliefs.</p><p>The thing about trust is that when it is built on something real (when it is solid), it only gets stronger over time (more solid). Beliefs are tested, the structure holds; and its holding creates a more pervasive trust in the beliefs. When it is built on hope, the structure becomes increasingly unstable over time, as it requires the continual addition of false beliefs as patch repair. 
This only increases the attack surface for the truth.</p><h5>Friction on the vulnerable party</h5><p>In terms of privacy preservation, specifically, at the heart of my argument is a critique of consent regimes. Consent (as it is framed in GDPR) presupposes that freedom means letting people decide, in their own best interest, how much they care about privacy; and that this will allow market forces to act as an invisible hand, guiding technology towards ever closer alignment with the ideal amount of privacy. </p><p>This is absolute garbage.</p><p>We don&#8217;t need my interviews to know that people aren&#8217;t making a free choice when they click to consent to whatever the technology demands. But that&#8217;s not even the crux of it. My issue is that consent puts the onus on the vulnerable party to protect themselves (and this applies beyond privacy, of course). If they mess up, if they fail to understand what they were consenting to, they are to blame. What?! By definition, being trustworthy means protecting the vulnerable. Yes, you&#8217;d hope that people are placing their trust sensibly. But let&#8217;s not blame the victim. The aggressor is always in the wrong. Period.</p><p>If we zoom out a bit, the issue here is that the friction within the system is placed where it should not be. Placing friction with the user is a recipe for dignity erosion. The person least capable of making the right decision is burdened with decision-making. (Often, too, the person with the least power, least information, and least expertise is tasked with mounting a legal objection to a privacy violation when they are harmed.) The most insidious effect of this is that when people are coerced into consenting, they feel complicit in their own devaluation. This is the moral injury we must live with when surrounded by untrustworthy technologies. Living with moral injury is destabilising, and deeply damaging to one&#8217;s ability to trust at all.
It leads to diffuse distrust not just of the untrustworthy technology, say a given AI, but of the entire class of technologies we might call AI.</p><h3>Formulating solid trust</h3><p>If I summarise the argument so far, the alternative will become obvious.</p><ul><li><p>Irreversibility &gt; undermining agency &gt; enmeshment stress &gt; looking for a way to extricate</p></li><li><p>Opacity &gt; undermining accountability &gt; assemblage of false beliefs &gt; increasingly easy to challenge and destroy trust</p></li><li><p>Friction on the vulnerable &gt; undermining dignity &gt; moral injury &gt; diminished capacity for trust in the digital world</p></li></ul><p>As a reminder (drawing on the Trust Envelope Model), agency, accountability, and dignity are the load-bearing struts in a system that promotes trust. Weakening them leads to system dynamics that perpetually diminish trust.</p><p>Building solid trust, then, involves the following.</p><p><strong>Reversibility.</strong> Make it easy for people to exit. This is a trust power move because it makes the company delivering the service mutually vulnerable. It shows users that the company is highly incentivised to be trustworthy, because it has to continue earning trust to earn money. When people stay with a company for reasons to do with trust, every use validates and grows their trust.</p><ul><li><p>Reversibility &gt; agency &gt; choosing to continue using &gt; contentment with decision</p></li></ul><p><strong>Inspectability.</strong> Make it really clear to people what claims you are making about why they ought to trust you. Don&#8217;t make them guess; don&#8217;t put them in the position of having to translate legalese into trust stories that won&#8217;t withstand scrutiny. Then back these claims up with verifiable evidence. In some cases, that might mean exposing when you&#8217;ve gotten it wrong and what you have done to repair the issue.
People can thus form trust that is based in evidence, not in hope. They develop confidence in their use of the service, and in their ability to make good decisions.</p><ul><li><p>Inspectability &gt; accountability &gt; assembling proofs &gt; self-assurance </p></li></ul><p><strong>Re-allocating friction.</strong> The company delivering the service bears the weight of responsibility for ensuring users&#8217; safety. The system is designed to protect users from harm as the bedrock operating principle. This also means that it is not the responsibility of the user to be vigilant for harms; continually testing and scanning for harms is internalised as an operating cost. So, too, is redress. If a user does come with a grievance, the company commits to doing whatever work is entailed in addressing it. This acts as a form of insurance: anything that goes wrong is taken as a cost to the company and repaired, rather than being absorbed as harm to the user. This materially reduces the risk to the user, and they can make decisions that are by default dignity-preserving.</p><ul><li><p>Re-allocating friction &gt; dignity &gt; fiduciary duty to vulnerable &gt; felt sense of safety</p></li></ul><h3>Conclusion</h3><p>The meta point I have been trying to make is that too often we mistake behaviour for evidence of trust. But clearly people can use technologies while distrusting them. Hopeful trust, which exists as a state of suspended reality between trust and distrust (trust formation in some sort of holding pattern), functions to sustain technology use; but there is no mechanism for converting hopeful trust into trust. It will eventually flip into distrust under the continued stress created by irreversibility, opacity, and friction (the plane must land eventually, people).
The only way to move people from hopeful trust to solid trust is to replace coercion with freedom, to replace beliefs with evidence, and to have the company internalise the work of protecting users.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1521984692647-a41fed613ec7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0MXx8cHJvdGVzdHxlbnwwfHx8fDE3NjAwOTA4Mjd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1521984692647-a41fed613ec7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0MXx8cHJvdGVzdHxlbnwwfHx8fDE3NjAwOTA4Mjd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1521984692647-a41fed613ec7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0MXx8cHJvdGVzdHxlbnwwfHx8fDE3NjAwOTA4Mjd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1521984692647-a41fed613ec7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0MXx8cHJvdGVzdHxlbnwwfHx8fDE3NjAwOTA4Mjd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1521984692647-a41fed613ec7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0MXx8cHJvdGVzdHxlbnwwfHx8fDE3NjAwOTA4Mjd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1521984692647-a41fed613ec7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0MXx8cHJvdGVzdHxlbnwwfHx8fDE3NjAwOTA4Mjd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="3500" height="2333"
data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1521984692647-a41fed613ec7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0MXx8cHJvdGVzdHxlbnwwfHx8fDE3NjAwOTA4Mjd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2333,&quot;width&quot;:3500,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;woman holding does anything even matter anymore? signage near building at daytime&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="woman holding does anything even matter anymore? signage near building at daytime" title="woman holding does anything even matter anymore? signage near building at daytime" srcset="https://images.unsplash.com/photo-1521984692647-a41fed613ec7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0MXx8cHJvdGVzdHxlbnwwfHx8fDE3NjAwOTA4Mjd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1521984692647-a41fed613ec7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0MXx8cHJvdGVzdHxlbnwwfHx8fDE3NjAwOTA4Mjd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1521984692647-a41fed613ec7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0MXx8cHJvdGVzdHxlbnwwfHx8fDE3NjAwOTA4Mjd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1521984692647-a41fed613ec7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0MXx8cHJvdGVzdHxlbnwwfHx8fDE3NjAwOTA4Mjd8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" 
sizes="100vw" loading="lazy"></picture></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@heathermount">Heather Mount</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><h3>Further reading</h3><p>Knowles, B. and Conchie, S., 2023. Un-paradoxing privacy: Considering hopeful trust. <em>ACM Transactions on Computer-Human Interaction</em>, <em>30</em>(6), pp.1-24. <a href="https://dl.acm.org/doi/abs/10.1145/3609329">https://dl.acm.org/doi/abs/10.1145/3609329</a></p><p>Sabino Marquez, 2025. The Sovereign Machine &#8212; Trust Value Management in the Age of AI. Trustable.tv. <a href="https://substack.com/@trustvalue/p-173193136">https://substack.com/@trustvalue/p-173193136</a></p><div class="embedded-post-wrap" data-attrs="{&quot;id&quot;:172879827,&quot;url&quot;:&quot;https://www.trustclub.tv/p/the-sovereign-machine&quot;,&quot;publication_id&quot;:4318455,&quot;publication_name&quot;:&quot;Trust Club: Home of Trust Value Management&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!KG6p!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1557b8de-b9c4-4b83-b5c6-cd8ccd1ec937_1024x1024.png&quot;,&quot;title&quot;:&quot;The Sovereign Machine White Paper &amp; Crosswalk&quot;,&quot;truncated_body_text&quot;:&quot;Series Introduction - The Sovereign Machine&quot;,&quot;date&quot;:&quot;2025-09-09T19:00:29.359Z&quot;,&quot;like_count&quot;:0,&quot;comment_count&quot;:0,&quot;bylines&quot;:[{&quot;id&quot;:324108415,&quot;name&quot;:&quot;Sabino
Marquez&quot;,&quot;handle&quot;:&quot;trustvalue&quot;,&quot;previous_name&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/85fee408-b712-44ea-a283-02bf263b5a3e_1024x1024.png&quot;,&quot;bio&quot;:&quot;Sabino Marquez, creator of Trust Value Management and The Trust Product, has redefined trust as a strategic discipline. His strategies have driven hundreds of millions in value, making trust the foundation for sustainable growth in an uncertain world&quot;,&quot;profile_set_up_at&quot;:&quot;2025-03-07T19:54:54.651Z&quot;,&quot;reader_installed_at&quot;:&quot;2025-07-16T16:56:48.614Z&quot;,&quot;publicationUsers&quot;:[{&quot;id&quot;:4405060,&quot;user_id&quot;:324108415,&quot;publication_id&quot;:4318455,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:true,&quot;publication&quot;:{&quot;id&quot;:4318455,&quot;name&quot;:&quot;Trust Club: Home of Trust Value Management&quot;,&quot;subdomain&quot;:&quot;trustclub&quot;,&quot;custom_domain&quot;:&quot;www.trustclub.tv&quot;,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Where Trust Value Leaders Meet, Learn, and Share.&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1557b8de-b9c4-4b83-b5c6-cd8ccd1ec937_1024x1024.png&quot;,&quot;author_id&quot;:324108415,&quot;primary_user_id&quot;:324108415,&quot;theme_var_background_pop&quot;:&quot;#FF6719&quot;,&quot;created_at&quot;:&quot;2025-03-07T20:11:45.276Z&quot;,&quot;email_from_name&quot;:&quot;Sabino from Trust Club&quot;,&quot;copyright&quot;:&quot;Sabino Marquez&quot;,&quot;founding_plan_name&quot;:&quot;Founding 
Member&quot;,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;enabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;magaziney&quot;,&quot;is_personal_mode&quot;:false}},{&quot;id&quot;:4405041,&quot;user_id&quot;:324108415,&quot;publication_id&quot;:4318437,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:false,&quot;publication&quot;:{&quot;id&quot;:4318437,&quot;name&quot;:&quot;Trust Value Management&quot;,&quot;subdomain&quot;:&quot;trustvalue&quot;,&quot;custom_domain&quot;:null,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;Trust Value Management is the strategic framework for transforming trust value into a monetizable asset, equipping businesses with the models, frameworks, and execution strategies to manufacture, capitalize, and defend trust value as a market advantage.&quot;,&quot;logo_url&quot;:null,&quot;author_id&quot;:324108415,&quot;primary_user_id&quot;:null,&quot;theme_var_background_pop&quot;:&quot;#FF6719&quot;,&quot;created_at&quot;:&quot;2025-03-07T20:09:41.602Z&quot;,&quot;email_from_name&quot;:&quot;Trust Value Management&quot;,&quot;copyright&quot;:&quot;Sabino Marquez&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;newspaper&quot;,&quot;is_personal_mode&quot;:false}},{&quot;id&quot;:4405102,&quot;user_id&quot;:324108415,&quot;publication_id&quot;:4318494,&quot;role&quot;:&quot;admin&quot;,&quot;public&quot;:true,&quot;is_primary&quot;:false,&quot;publication&quot;:{&quot;id&quot;:4318494,&quot;name&quot;:&quot;The Trust Product&quot;,&quot;subdomain&quot;:&quot;thetrustproduct&quot;,&quot;custom_domain&quot;:null,&quot;custom_domain_optional&quot;:false,&quot;hero_text&quot;:&quot;The Trust Product is a 
business system that pivots internally facing service functions into externally facing product organizations, delivering trustworthiness as a visible, measurable market asset that drives value.&quot;,&quot;logo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/85fee408-b712-44ea-a283-02bf263b5a3e_1024x1024.png&quot;,&quot;author_id&quot;:324108415,&quot;primary_user_id&quot;:null,&quot;theme_var_background_pop&quot;:&quot;#FF6719&quot;,&quot;created_at&quot;:&quot;2025-03-07T20:16:07.894Z&quot;,&quot;email_from_name&quot;:null,&quot;copyright&quot;:&quot;Sabino Marquez&quot;,&quot;founding_plan_name&quot;:null,&quot;community_enabled&quot;:true,&quot;invite_only&quot;:false,&quot;payments_state&quot;:&quot;disabled&quot;,&quot;language&quot;:null,&quot;explicit&quot;:false,&quot;homepage_type&quot;:&quot;newspaper&quot;,&quot;is_personal_mode&quot;:false}}],&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null,&quot;status&quot;:{&quot;bestsellerTier&quot;:null,&quot;subscriberTier&quot;:null,&quot;leaderboard&quot;:null,&quot;vip&quot;:false,&quot;badge&quot;:null,&quot;paidPublicationIds&quot;:[]}}],&quot;utm_campaign&quot;:null,&quot;belowTheFold&quot;:true,&quot;type&quot;:&quot;newsletter&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="EmbeddedPostToDOM"><a class="embedded-post" native="true" href="https://www.trustclub.tv/p/the-sovereign-machine?utm_source=substack&amp;utm_campaign=post_embed&amp;utm_medium=web"><div class="embedded-post-header"><img class="embedded-post-publication-logo" src="https://substackcdn.com/image/fetch/$s_!KG6p!,w_56,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1557b8de-b9c4-4b83-b5c6-cd8ccd1ec937_1024x1024.png" loading="lazy"><span class="embedded-post-publication-name">Trust Club: Home of Trust Value Management</span></div><div class="embedded-post-title-wrapper"><div class="embedded-post-title">The Sovereign 
Machine White Paper &amp; Crosswalk</div></div><div class="embedded-post-body">Series Introduction - The Sovereign Machine&#8230;</div><div class="embedded-post-cta-wrapper"><span class="embedded-post-cta">Read more</span></div><div class="embedded-post-meta">7 months ago &#183; Sabino Marquez</div></a></div>]]></content:encoded></item><item><title><![CDATA[Hopeful trust]]></title><description><![CDATA[Making sense of when trustworthiness doesn't seem to matter]]></description><link>https://trustbranknowles.substack.com/p/hopeful-trust</link><guid isPermaLink="false">https://trustbranknowles.substack.com/p/hopeful-trust</guid><dc:creator><![CDATA[Bran Knowles]]></dc:creator><pubDate>Wed, 01 Oct 2025 10:17:14 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1515191107209-c28698631303?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8aG9wZXxlbnwwfHx8fDE3NTkyNDcxNjN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In my last post, I shifted from talking about trustworthiness as a matter of moral character to a practice that yields competitive advantage for companies. But if this is so, what&#8217;s going on with these billion- and trillion-dollar tech companies that are thriving despite being ostentatiously untrustworthy?</p><p>It&#8217;s an important question. After all, if people are willing to use untrustworthy services, then what (in the absence of an effective legal-regulatory system with real authority) would compel tech companies to be trustworthy?</p><p>We might assume that if people appear to compartmentalise their trust, to put such matters aside when they want or need to use a service, that trust is largely irrelevant to customer behaviour. 
But I think it&#8217;s more complicated than this&#8230;</p><h3>Practices, practical concerns, and motivations</h3><p>Scholars (economists, in particular) fantasise that when a person is deciding to use a technology, they engage rational machinery that helps them figure out how to maximise their self-interest. If they care about privacy, for example, then this will have a certain weight in the calculation, and they need to decide how much that is worth in some sort of trade against the value of the tool they are considering using (this is known as the privacy calculus model).</p><p>But in reality, we encounter technologies amidst the mundane chaos of the routine practices that constitute our lives. At the point we face a decision to adopt a tool, it is probably because an immediate practical concern brought us here and the decision has largely been made for us: our boss told us to download something, our kid joined a new club that requires payment through a particular platform, etc.</p><p>Often trust is backgrounded in the decision process, not because it doesn&#8217;t matter, but rather because it&#8217;s <em>required</em> as a premise of being able to use the technology at all. In some respects, when we use a tool without deliberating on trust, we are acting <em>as if</em> we trust (see Michael K MacKenzie&#8217;s <a href="https://www.tandfonline.com/doi/full/10.1080/13698230.2024.2423141">&#8220;As-if trust&#8221;</a>). And having acted <em>as if</em> we trusted sets us up for later interpreting our actions as indicative of us having <em>actually </em>trusted&#8230; but did we? Do we?</p><h3>Accounting for and reconciling behaviour</h3><p>Being able to move about in the world with a sense of safety requires that we trust ourselves to make good decisions&#8212;that <em>we</em> are trustworthy in this way. This drives several discrepancy-reduction strategies.
</p><p>The first effect is that when people are asked why they decided to use a tool, or asked whether they trust a technology they are already using, they are likely to over-emphasise trust as a way of morally accounting for their action. So we need to take self-reported trust with a grain of salt: it doesn&#8217;t necessarily accurately reflect trust at the point of making a decision to use a tool. It can be a post hoc rationalisation, a means of compensating for one&#8217;s own apparent untrustworthiness, or more specifically, of making oneself <em>appear digitally competent to others</em>, when in fact the question of trustworthiness never entered one&#8217;s mind. So while there is plenty of empirical research that finds a link between self-reported trust in services and use of those services, and this is taken to mean there must be some trust calculus at work, it may actually indicate the reverse. It may be, instead, that using a service increases <em>reported trust</em> of that service, for reasons that have nothing to do with the technology (or company) and everything to do with our insecurities about our ability to successfully navigate the perils of this digital world.</p><p>The second effect is what a woman I once interviewed described as &#8220;planned ignoring&#8221;&#8212;that is, ignoring evidence of untrustworthiness that provokes anxiety. People will rationalise that worrying about how untrustworthy these services are is bad for their mental health, so they choose to just not think about it. But this is different from <em>not seeing</em> the untrustworthiness; it&#8217;s choosing not to feel it. As this participant told me, 
cognitively, I know; emotionally it presses a few buttons; but this is life, and I&#8217;m old enough to know that success in life is dealing with the crap that life throws at you and surviving it and being adaptable. . . it&#8217;s like lots of unpleasant things in life. You just, can just put them on the back burner, like a parrot on your shoulder. You know they&#8217;re there, occasionally things will happen and it squawks in your ear. But most of the time you&#8217;re just aware and it&#8217;s resting. That&#8217;s the way I look at it.&#8217;</p></blockquote><p>The third effect is even more interesting. When confronted with evidence of untrustworthiness of services that they use, people will attempt to argue that they have <em>sensible reasons</em> for trusting the company. When discontinuation of the technology is infeasible, users don&#8217;t just discount or ignore evidence of untrustworthiness, they confabulate evidence that would give them reason to trust. The most common example of this was when people argued that if the service was <em>really</em> untrustworthy, then everyone would stop using it&#8230; so it must be fine&#8230; Even though they had just told me that they knew a given service was untrustworthy but they couldn&#8217;t stop using it, they liked to imagine that others had more choice than they did. But most interesting of all, they demonstrated self-awareness: they knew this was imagining. This was <em>hoping</em> things were different than they knew they were.</p><h3>Hopeful trust</h3><p><a href="https://www.tandfonline.com/doi/abs/10.1080/00048400801886413">Victoria McGeer</a> writes about a kind of trust she calls <em>substantial trust</em> that &#8220;renounces the very process of weighing whatever evidence there is in a cool, disengaged, and purportedly objective way.&#8221; The example she uses is a person who trusts a friend even in the face of damning evidence they have committed a crime. 
They feel (seemingly unreasonably) <em>hopeful</em> about the person in question. This hope enters when individuals face a limit to their own &#8220;agential powers&#8221;. When there is little to do to affect the outcome, hope allows a person to &#8220;rid[e] out&#8221; worries and self-doubt. It creates what McGeer describes as &#8220;affectively charged scaffolding&#8221; for doing what one can do in the situation.</p><p>When people hope that the services they find themselves intertwined with are trustworthy (even in the face of clear evidence to the contrary), while this might seem irrational, what else is there? Hopeful trust is a pragmatic response to the reality we find ourselves in, where refusing to use every untrustworthy piece of technology would render us non-functional in society.</p><p>But we can take McGeer&#8217;s insight too far. Hopeful trust does not lead to absolute blindness to evidence of untrustworthiness. A person does not necessarily rest easy in their hopeful trust; it takes a lot of active suppression to sustain. One has seen the evidence and stored it somewhere, even if one has shoved it down and refuses to look at it. There is a partial knowing that can bubble up to the surface at any point given the right trigger.</p><p>I am obsessed with true crime and have seen this with wives of serial killers whose husbands are caught and found guilty, and yet they deny, deny, deny&#8230; until one day they can&#8217;t sustain the hopeful trust any longer. The drawer that they have put that evidence in is now full. The hope dies.</p><p>So how does this help us make sense of the apparent success of untrustworthy tech companies? </p><p>It would be hugely convenient for the untrustworthy companies if we started to doubt the idea that trustworthiness matters.
It&#8217;s the same narrative that&#8217;s been peddled for years about the so-called &#8220;death of privacy&#8221;&#8212;that people don&#8217;t seem to care about privacy, as their actions sure don&#8217;t align with valuing privacy. But their actions <em>can&#8217;t </em>align with their values. They would much prefer to protect their privacy, but it is constantly extracted as a cost of being able to use digital services.</p><p>Trustworthiness is far from dead. <em>What hopeful trust shows is just how much people wish tech were more trustworthy than it is.</em></p><h3>The fragility of hopeful trust</h3><p>I still have not really answered the question of whether there is a competitive advantage to being trustworthy&#8230;</p><p>In speaking with people about their attitudes, I have found that they are not so deluded as to believe these tech companies are trustworthy. They are seeing, they are listening&#8230; they are just turning the dial down so they can cope with the uncomfortable feelings it provokes. </p><p>This suggests that if given the opportunity to stop using these technologies, they would do so in a heartbeat. Untrustworthy companies must always live in fear of a viable competitor.</p><p>But people also tell me how they use these technologies they don&#8217;t fully trust. They engage in a dance of deception, feeding a service false information because they don&#8217;t trust the company not to misuse it. Over time, this bad data poisons the product.
They also use it as sparingly as possible&#8212;quickly dipping in, doing whatever they have to do, then getting out. No lingering. (For me, I know that I go on Facebook to see posts from my children&#8217;s Scouts group and then get the hell outta there before I see anything else!) For businesses whose profit increases as a function of user engagement, this represents a substantial loss. </p><p>Ultimately, while untrustworthiness may not be the differentiator between the highest and lowest profit companies, it is probably the difference between those companies that last and those that eventually fade away.</p><p>It&#8217;s not entirely satisfying&#8212;we&#8217;d like there to be more justice in the world than this. And we should continue to hope for this justice. As long as we have this hope&#8212;as long as we don&#8217;t give up on the idea that it matters that tech companies are trustworthy, and that we deserve this&#8212;then we will get closer to the day untrustworthy companies reap what they have sown.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1515191107209-c28698631303?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8aG9wZXxlbnwwfHx8fDE3NTkyNDcxNjN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1515191107209-c28698631303?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8aG9wZXxlbnwwfHx8fDE3NTkyNDcxNjN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1515191107209-c28698631303?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8aG9wZXxlbnwwfHx8fDE3NTkyNDcxNjN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, 
https://images.unsplash.com/photo-1515191107209-c28698631303?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8aG9wZXxlbnwwfHx8fDE3NTkyNDcxNjN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1515191107209-c28698631303?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8aG9wZXxlbnwwfHx8fDE3NTkyNDcxNjN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1515191107209-c28698631303?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8aG9wZXxlbnwwfHx8fDE3NTkyNDcxNjN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="2090" height="2613" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1515191107209-c28698631303?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8aG9wZXxlbnwwfHx8fDE3NTkyNDcxNjN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2613,&quot;width&quot;:2090,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;silhouette of personr&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="silhouette of personr" title="silhouette of personr" srcset="https://images.unsplash.com/photo-1515191107209-c28698631303?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8aG9wZXxlbnwwfHx8fDE3NTkyNDcxNjN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, 
https://images.unsplash.com/photo-1515191107209-c28698631303?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8aG9wZXxlbnwwfHx8fDE3NTkyNDcxNjN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1515191107209-c28698631303?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8aG9wZXxlbnwwfHx8fDE3NTkyNDcxNjN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1515191107209-c28698631303?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxMHx8aG9wZXxlbnwwfHx8fDE3NTkyNDcxNjN8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@mbrunacr">Miguel Bruna</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Practicing Trustworthy AI by Bran Knowles! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><h3>Further reading</h3><p>McGeer, V. (2008). Trust, hope and empowerment. <em>Australasian Journal of Philosophy</em>, <em>86</em>(2), 237&#8211;254. https://doi.org/10.1080/00048400801886413</p><p>See below to learn more about the study that informed these views:</p><pre><code>Bran Knowles and Stacey Conchie. 2023. Un-Paradoxing Privacy: Considering Hopeful Trust. ACM Trans. Comput.-Hum. Interact. 30, 6, Article 87 (December 2023), 24 pages. 
<a href="https://dl.acm.org/doi/10.1145/3609329">https://doi.org/10.1145/3609329</a></code></pre>]]></content:encoded></item><item><title><![CDATA[A systems view of trustworthy AI]]></title><description><![CDATA[Why a virtue theory of trustworthy AI matters beyond being virtuous]]></description><link>https://trustbranknowles.substack.com/p/a-systems-view-of-trustworthy-ai</link><guid isPermaLink="false">https://trustbranknowles.substack.com/p/a-systems-view-of-trustworthy-ai</guid><dc:creator><![CDATA[Bran Knowles]]></dc:creator><pubDate>Wed, 17 Sep 2025 14:41:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!DGoo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b1ada4-eaf1-4bef-85d9-b3cfb8e057f7_2316x2475.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>[In case anyone has been tracking when I normally post and was expecting a post at the end of last week, I was frantically writing papers for the CHI conference. And because they had almost nothing to do with trust, it derailed my train of thought a bit. Happily, I&#8217;m now getting back to the juicy trust stuff!]</em></p><p>I&#8217;ve spent a number of weeks here characterising the trustworthy AI practitioner&#8217;s individual moral responsibility. I have used Potter&#8217;s book <em>How Can I Be Trusted? </em>as a foundational text for elaborating a virtue theory of trustworthy AI. Her book is so resonant. It speaks to that feeling of wanting to shake people and scream &#8220;Be better!&#8221; But more productively, it has helped pinpoint where AI practitioners tend to fall short when it comes to trustworthiness and what would need to change in order to be &#8220;fully trustworthy&#8221;.</p><p>What&#8217;s missing from this analysis is a systems view on what happens when these requirements are unfulfilled. 
The risk with Potter&#8217;s work is that it can be dismissed as a text for the morally pious: <em>here&#8217;s how to be the best, most ethical person, better than all those untrustworthy people. </em>Similarly, I&#8217;m aware that in identifying the requirements of the trustworthy AI practitioner, people may think I&#8217;m describing how one can develop AI and still be a decent person. (Tips for how to sleep better at night, in other words.)</p><p>Sure, be a good person. But what I hope to make clear with this post is that trustworthiness isn&#8217;t about feeling good about yourself; and it isn&#8217;t a &#8216;nice to have&#8217;. Trustworthiness is essential for the endurance of this technology and of the companies that develop it.</p><h3>The Trust Envelope Model</h3><p>In his <a href="https://substack.com/@trustvalue/p-173139335">recent post</a>, <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Sabino Marquez&quot;,&quot;id&quot;:324108415,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/85fee408-b712-44ea-a283-02bf263b5a3e_1024x1024.png&quot;,&quot;uuid&quot;:&quot;fd2e1206-17b9-4cb2-ab73-43d52aa0be32&quot;}" data-component-name="MentionToDOM"></span> presents his staggeringly eloquent Trust Envelope Model (TEM) to explain system thriving / collapse at all scales. The basic premise is that humans need certain structural elements in place to thrive as a collective. This goes beyond trite sayings like, &#8220;trust is the glue of society&#8221; (it is&#8230; but what do we do with this?). The model shows how trust is supported through dignity, agency, and accountability, making possible cooperation and adaptability&#8212;these being the fruit of trust that sustains a society.</p><p>System collapse&#8212;from the small scale, e.g. the business that tanks, to the large scale, e.g. 
the empire that falls&#8212;can be diagnosed in terms of failure points in the model.</p><ul><li><p><strong>Dignity.</strong> Systems that neglect dignity are characterised by exploitation, with individuals locked in a zero-sum game. While this benefits some in the short term, the long-term opportunity cost is the mutual and continually self-reinforcing gain enabled through cooperation. Lack of dignity fractures society into competing groups; abundance of dignity unites people in shared purpose and allows for pooling of resources, which accrue in a positive-sum game.</p></li><li><p><strong>Agency.</strong> When people have resources (skills, ideas, wisdom, etc.), they need agency to put them into action. Agency drives adaptability through empowering inspiration and drives cooperation through empowering people to work to create meaningful change. Depriving people of agency, on the other hand, stifles creativity (since good ideas can&#8217;t be implemented anyway), turns people into automatons (since there&#8217;s nothing that can be done except going with the flow), and promotes groupthink (rather than cooperative friction).</p></li><li><p><strong>Accountability.</strong> Systems need feedback to know whether the decisions that have been made have promoted positive or negative change. When accountability is in place, it enables this connection, driving insight and giving systems direction. In this role, accountability also stabilises these other nodes so they don&#8217;t lapse into their shadow versions. 
In Sabino&#8217;s words, &#8220;Without accountability, dignity is hollow and agency becomes a destructive license.&#8221;</p></li></ul><p>Together, these three components promote cooperation and adaptability, which is what humans need to absorb external shocks and continually improve their conditions.</p><p>[All credit to Sabino Marquez and his team for the above.]</p><h3>Applying TEM: the example of higher education</h3><p>I&#8217;m going to spend some time on an example that I suspect (alas) is close to home for many reading this, as I think here TEM will make intuitive sense.</p><p>A while ago I read a <a href="https://www.linkedin.com/posts/activity-7360700604542873603-l2Na?utm_source=social_share_send&amp;utm_medium=member_desktop_web&amp;rcm=ACoAAEeHoR4BVEFWoSpXoMNdJ8WaQPphHjRamYg">LinkedIn post</a> that offered a refreshing take on what is happening within UK higher education and how to turn a corner on its collapse. <a href="https://www.linkedin.com/in/monicafranco/overlay/about-this-profile/">Monica Franco-Santos</a> argues (though I am slightly changing her language) that we are locked into a competitive-exploitative framework that sees the only solution to the current financial crisis as greater efficiency. If there is a drop in demand, with fewer students willing to pay tuition, then universities need to find a way of lowering the cost of production. This means cutting as much staff as these universities think they can get away with and refocusing the remaining academics on delivery of the core product (narrowly construed as teaching). Activities like &#8220;thinking&#8221;, which drew people to the profession in the first place, are treated as inefficiencies. This translates to a substantial increase in workload for staff without any increase in salary (and with less of the good stuff that keeps people happy), i.e. ramping up the exploitation. 
Franco-Santos writes:</p><blockquote><p>&#8220;These actions can feel logical, and even necessary, but research on complex systems and mission-oriented organisations shows they can unintentionally erode the very networks, expertise, and diversity of thought that underpin value and resilience.&#8221;</p></blockquote><p>What she is diagnosing is a system with multiple points of failure leading to diminished trust and its payoff, cooperation-adaptability. </p><p>When staff are treated without dignity, the employer-employee relationship becomes transactional, antagonistic, uncooperative. Unremunerated activities are dropped from the to-do list. Often these are the very things that promote trust, thus killing the engine that drives growth.</p><p>Meanwhile, rising workloads limit agency, with staff unable to do the things that make the institution flourish. They don&#8217;t have time to adequately support students or peers, leading to worse student experience (further reducing demand) and staff churn (undermining capability to deliver the product). Nor can staff pursue grants or engage in other activities that build the reputation of the institution. No one can do what is meaningful because it is not measured. The value of this higher education product plummets. </p><p>Further, there is no accountability for the ways trust is being eroded and why value is in freefall. Vice Chancellors move from institution to institution before the failure of their policies can catch up with them. 
The feedback is too slow to matter to those making decisions.</p><p>Franco-Santos proposes a new underlying metaphor, one that emphasises flourishing, as a way out of this death spiral:</p><blockquote><p>&#8220;What if we see the UK HE sector not as a market of competing factories but as a knowledge ecosystem, with each university as a living garden?&#8221;</p></blockquote><p>While she proposes understanding the financial crisis as a &#8220;drought&#8221;, I think this puts too much emphasis on environmental factors and too little on the self-inflicted wound that undermines adaptability. Her point is still incisive, however: that when we recognise that human flourishing is a positive-sum game, we instrument solutions that foster trust. She provides some inspiration for the kinds of things you&#8217;d be doing differently if you adopted this perspective (I&#8217;m quoting but changing the order to align with TEM):</p><p>&#8220;&#128073;Tend the roots: Protect staff, students and their wellbeing; keep academic voices central in decisions (they have the local knowledge)&#8230; &#128073;Invite co-gardeners: Seriously engage students and industry leaders as co-creators of programmes, boosting retention and support&#8221;</p><ul><li><p>i.e. treat all stakeholders with <strong>dignity; </strong>and give people <strong>agency</strong> to <strong>cooperate</strong> in translating their expertise into system <strong>adaptability</strong></p></li></ul><p>&#8220;&#128073;Remove weeds: Reduce bureaucracy so resources are used where they matter most&#8221;</p><ul><li><p>i.e. pull back investment in anything that hinders <strong>agency </strong>and <strong>cooperation</strong></p></li></ul><p>&#8220;&#128073;Encourage cross-pollination: Foster collaboration with teams (focused on strengths) and partnerships&#8230; &#128073;Improve irrigation: Centralise technical systems (tech, data), NOT people to avoid uprooting local networks&#8221;</p><ul><li><p>i.e. 
assign value to and appropriately reward <strong>cooperation</strong>, and invest in cooperation infrastructure</p></li></ul><p>The only aspect missing from this is <strong>accountability</strong>; I propose something along the lines of:</p><p>&#128073;Monitor yields: Measure improvements in cooperation and adaptability, and make explicit links between decisions and consequences</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/a-systems-view-of-trustworthy-ai/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/p/a-systems-view-of-trustworthy-ai/comments"><span>Leave a comment</span></a></p><h3>Trustworthiness as AI practice</h3><p>Now to get to the meat of this post&#8230; </p><p>My aim is to re-examine the ways AI practitioners tend to fall short of being fully trustworthy as indicative of a system that is sacrificing short-term gain for the future benefits that come from structurally embedding trustworthiness into practice.</p><h5>Potter&#8217;s argument in TEM terms</h5><p>I&#8217;m going to start by translating Potter&#8217;s argument into the TEM model. According to her, trustworthiness is&#8230;</p><ul><li><p><em><a href="https://open.substack.com/pub/trustbranknowles/p/re-politicising-trustworthy-ai?r=5z1pkv&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">Really seeing people in their full humanity</a>:</em> Listening to their pain, feeling it emotionally, and being moved by one&#8217;s own humanity / empathy to care for them (&#8220;to see with the whole heart&#8221;). This is treating people with <strong>dignity</strong>: understanding that they have worth and that their feelings and perspectives matter. Dignity is the starting point for Potter. 
Trustworthiness is non-domination and non-exploitation by definition, in that it means taking care of a person&#8217;s vulnerability and not misusing discretionary power. It&#8217;s why so many of her trustworthiness requirements are about cultivating genuine caring for people: striving to understand from another&#8217;s perspective how you might be untrustworthy (this is the &#8220;epistemic effort&#8221; demanded of trustworthy people), and taking special care with the trust of those who have not been treated well by others. The former can be understood as <em>dignity affirmation</em>; the latter is the work of <em>dignity repair</em>.</p></li><li><p><em><a href="https://open.substack.com/pub/trustbranknowles/p/4-kinds-of-silencing?r=5z1pkv&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">Giving uptake to claims of injustice</a>:</em> Enabling people to speak and, in doing so, effect change. Ultimately, this is about promoting <strong>agency</strong>. The trustworthy person is open to questioning their assumptions, to being corrected, to changing course, to making repairs. Agency involves the trustor in the work of defining what trustworthiness looks like, promoting resilience by preventing the accumulation of grievances that get in the way of future collaboration.</p></li><li><p><em><a href="https://open.substack.com/pub/trustbranknowles/p/sorry-not-sorry?r=5z1pkv&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">Taking responsibility for earning and sustaining trust.</a> </em>Even the most virtuous person who is fully committed to being trustworthy will sometimes harm others. In such instances, trust is stabilised through <strong>accountability</strong>: by seeking to understand the consequences of one&#8217;s actions and sincerely apologising. 
This includes doing the work of identifying the moral failing, figuring out exactly what needs to change, implementing the learning, and feeding that back to the injured party. For Potter, though, accountability runs even deeper. It means accepting that it is your responsibility to earn trust, not the responsibility of others to give trust. For this reason, she emphasises the importance of giving signs and assurances of trustworthiness, most especially to those who have good reason to be wary of trusting others.</p></li></ul><h5>How AI practitioners can be trustworthy</h5><p>Given that I built up from Potter, it should be no surprise that the requirements of practicing trustworthy AI are roughly the same. But it&#8217;s still worth showing what these recommendations look like when they&#8217;re repackaged according to TEM.</p><ul><li><p><strong>Dignity</strong>. I have tried to emphasise the subjective nature of trustworthiness, how what&#8217;s trustworthy AI for some is untrustworthy AI for others. When we get fixated on metrics and principles, we forget that these represent a particular viewpoint; and adopting that viewpoint as the only legitimate viewpoint is not dignity affirming, as it denies the feelings and experiences of others and allows for harm to occur. Treating people with dignity is the moral guardrail against committing harm. One cannot be a trustworthy AI practitioner, in other words, if one does not genuinely care about the effect the AI has on people (all people). The kinds of things I have said we need to get better at, therefore, involve self-reflection and perspective taking: thinking critically about how the AI reflects one&#8217;s own view of the world, and how other people may experience it as harmful, threatening, or in other ways untrustworthy. </p></li><li><p><strong>Agency</strong>. I have also explored how Trustworthy AI norms become &#8220;mechanisms of control&#8221; (Potter&#8217;s phrase, p. xiii) in the absence of agency. 
The remedy is making room for contestation of values and practices, giving all people a voice in criticising or expanding the norms of Trustworthy AI. In practice, this not only means listening to claims of untrustworthiness, i.e. being responsive to distrust as a negotiation signal, but also deeply reflecting on whose claims are not being listened to and why, and enabling disaffected communities to use their voice to shape understandings of what makes AI trustworthy. Ultimately, agency enables the creative, cooperative work that spurs product evolution towards greater trustworthiness and engenders trust.</p></li><li><p><strong>Accountability</strong>. At the beginning of this series I argued that AI practitioners need to routinely ask themselves &#8220;Why don&#8217;t some people trust my AI?&#8221; To ask this question sincerely is to accept responsibility for earning trust. I&#8217;ve emphasised the importance of <em>exhibiting</em> trustworthiness: not just avoiding causing harm and making reparations, but providing evidence of trustworthiness. (I have intentionally avoided the phrase &#8220;audit trail&#8221;, because this isn&#8217;t evidence for the auditors, this is evidence for the trustor, and needs to be designed to meet their trust needs.) This means representing values and beliefs explicitly in the interface to account for one&#8217;s own understanding of how the AI is trustworthy, but specifically (and this is the crucial bit, requiring both dignity and agency) as an invitation for others to challenge this account, to point out how what you have implemented leads to untrustworthy outcomes, and to recommend ways of course-correcting to earn trust and grow a customer base. 
When this is not a cooperative activity, the feedback mechanism becomes an echo chamber, and the divide between Trustworthy AI and Public Trust grows, leading to entrenched public resistance and customer churn.</p></li></ul><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/a-systems-view-of-trustworthy-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Practicing Trustworthy AI by Bran Knowles! This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/a-systems-view-of-trustworthy-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/p/a-systems-view-of-trustworthy-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h3>Conclusion</h3><p>To bring this home, it is important to reiterate that dignity, agency, and accountability are not just lovely things that we should care about because they&#8217;re lovely. Together, they are what makes systems last, and attending to them <em>through being trustworthy in these ways</em> is what will make the AI company you are working for thrive. (Managers, CEOs!&#8212;listen up!) As Sabino explains:</p><blockquote><p>Enterprises are civilizations in miniature. They rise, they compete, they adapt, they collapse. They too depend on cooperation and adaptability stabilized by dignity, agency, and accountability. When they honor these factors, they become trusted organizations that endure. 
When they neglect them, they decay into brittle systems that erode value and fail stakeholders.</p></blockquote><p>I fear we have gotten a bit sidetracked in the Trustworthy AI world with the notion that Fairness, Accountability, and Transparency are a magic formula for promoting trust (see the FAccT conference). Companies attend to these because they are auditable&#8212;people have developed metrics and best practice aligned to these principles and auditors will score these aspects. But are these really the most important ones to focus on if the goal is to promote trust and grow the company&#8217;s customer base?</p><p>I&#8217;ll stand by Accountability (so long as the way we interpret this is allowed to change), but I think nothing of Fairness or Transparency is lost, and much is gained, by instead adopting Dignity and Agency and linking these three to the Cooperation-Adaptability engine. This model makes ethics urgent in a way that hasn&#8217;t been fully appreciated in the sector. It shows that cultivating the virtue of trustworthiness matters&#8212;not as a feel-good thing, but as a business model.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Practicing Trustworthy AI by Bran Knowles! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!DGoo!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b1ada4-eaf1-4bef-85d9-b3cfb8e057f7_2316x2475.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!DGoo!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b1ada4-eaf1-4bef-85d9-b3cfb8e057f7_2316x2475.png 424w, https://substackcdn.com/image/fetch/$s_!DGoo!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b1ada4-eaf1-4bef-85d9-b3cfb8e057f7_2316x2475.png 848w, https://substackcdn.com/image/fetch/$s_!DGoo!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b1ada4-eaf1-4bef-85d9-b3cfb8e057f7_2316x2475.png 1272w, https://substackcdn.com/image/fetch/$s_!DGoo!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b1ada4-eaf1-4bef-85d9-b3cfb8e057f7_2316x2475.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!DGoo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b1ada4-eaf1-4bef-85d9-b3cfb8e057f7_2316x2475.png" 
width="1456" height="1556" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/37b1ada4-eaf1-4bef-85d9-b3cfb8e057f7_2316x2475.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1556,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7106078,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://trustbranknowles.substack.com/i/170169742?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b1ada4-eaf1-4bef-85d9-b3cfb8e057f7_2316x2475.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!DGoo!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b1ada4-eaf1-4bef-85d9-b3cfb8e057f7_2316x2475.png 424w, https://substackcdn.com/image/fetch/$s_!DGoo!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b1ada4-eaf1-4bef-85d9-b3cfb8e057f7_2316x2475.png 848w, https://substackcdn.com/image/fetch/$s_!DGoo!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b1ada4-eaf1-4bef-85d9-b3cfb8e057f7_2316x2475.png 1272w, https://substackcdn.com/image/fetch/$s_!DGoo!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F37b1ada4-eaf1-4bef-85d9-b3cfb8e057f7_2316x2475.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>This post is my way of rounding off my deep dive on Potter. 
Onto the next idea!</p>]]></content:encoded></item><item><title><![CDATA["Sorry, not sorry"]]></title><description><![CDATA[Rupture and repair with chatbots]]></description><link>https://trustbranknowles.substack.com/p/sorry-not-sorry</link><guid isPermaLink="false">https://trustbranknowles.substack.com/p/sorry-not-sorry</guid><dc:creator><![CDATA[Bran Knowles]]></dc:creator><pubDate>Sat, 06 Sep 2025 17:07:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!XqzH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fa8218c-dffd-4151-9f74-c996d0b52e1d_800x1000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This week I&#8217;m indulging in a slight detour from the deep dive I&#8217;ve been doing on Nancy Nyquist Potter&#8217;s book, <em>How Can I Be Trusted.</em> These thoughts were prompted by the book, as she discusses at great length what it means to be trustworthy within interpersonal relationships (friendships, intimate relationships), but I&#8217;d been struggling to incorporate those insights in this series as they didn&#8217;t translate especially well to the work I was doing in elaborating the habits and sensibilities of trustworthy AI practitioners.</p><p>It has, however, renewed my interest in matters of trustworthiness as it comes to chatbots&#8212;these strange entities designed to impersonate conversational partners but increasingly standing in for friends, romantic partners, therapists, etc.</p><p>The phenomenon I&#8217;m particularly interested in is attempted trust repair when a chatbot gets something wrong. We know from Potter and others the importance of repair for maintaining trust. People make mistakes, but they can nevertheless preserve the trust they&#8217;ve built. 
Despite the popularity of the adage &#8220;trust takes years to build and seconds to destroy&#8221;, those whose trust plummets catastrophically have likely neglected trustworthiness at the foundation level and have been accumulating trust debt for some time.</p><p>So here I want to explore what it looks like to repair trust, and then look at how chatbots are emulating certain affectations of contrition without doing the necessary repair work.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!XqzH!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fa8218c-dffd-4151-9f74-c996d0b52e1d_800x1000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!XqzH!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fa8218c-dffd-4151-9f74-c996d0b52e1d_800x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!XqzH!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fa8218c-dffd-4151-9f74-c996d0b52e1d_800x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!XqzH!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fa8218c-dffd-4151-9f74-c996d0b52e1d_800x1000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!XqzH!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fa8218c-dffd-4151-9f74-c996d0b52e1d_800x1000.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!XqzH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fa8218c-dffd-4151-9f74-c996d0b52e1d_800x1000.jpeg" width="800" height="1000" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6fa8218c-dffd-4151-9f74-c996d0b52e1d_800x1000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1000,&quot;width&quot;:800,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;A comic on the bridge from Star Trek the Next Generation.\nPicard: COMMANDER DATA, PLEASE IDENTIFY THAT ROMULAN VESSEL.\nData: THAT'S A GREAT IDEA CAPTAIN!\nIDENTIFYING A VESSEL IS A GREAT PLACE TO START - IN ANY TACTICAL OR STRATEGIC OUTER SPACE SITUATION.\nTHIS VESSEL APPEARS TO BE A 23rd CENTURY KLINGON BIRD OF\nPREY! &#128640;&#129413;&#10024;\nPicard: ARE YOU SURE?\nLIKE I SAID WE'RE... PRETTY SURE IT'S ROMULAN.\nData: ...\nData: OF COURSE! SO SORRY ABOUT THAT, YOU'RE RIGHT!\nON CLOSER EXAMINATION IT'S A ROMULAN VESSEL! CAN I RECOMMEND SOME SOONG&#8482; BRAND PRODUCTS THAT CAN HELP YOU WITH THAT?\nPicard cradles his face in his hand in a gesture of frustration.\nData: DID I MENTION THE PLIGHT OF OPRESSED WHITES IN SOUTH AFRICA?&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="A comic on the bridge from Star Trek the Next Generation.
Picard: COMMANDER DATA, PLEASE IDENTIFY THAT ROMULAN VESSEL.
Data: THAT'S A GREAT IDEA CAPTAIN!
IDENTIFYING A VESSEL IS A GREAT PLACE TO START - IN ANY TACTICAL OR STRATEGIC OUTER SPACE SITUATION.
THIS VESSEL APPEARS TO BE A 23rd CENTURY KLINGON BIRD OF
PREY! &#128640;&#129413;&#10024;
Picard: ARE YOU SURE?
LIKE I SAID WE'RE... PRETTY SURE IT'S ROMULAN.
Data: ...
Data: OF COURSE! SO SORRY ABOUT THAT, YOU'RE RIGHT!
ON CLOSER EXAMINATION IT'S A ROMULAN VESSEL! CAN I RECOMMEND SOME SOONG&#8482; BRAND PRODUCTS THAT CAN HELP YOU WITH THAT?
Picard cradles his face in his hand in a gesture of frustration.
Data: DID I MENTION THE PLIGHT OF OPRESSED WHITES IN SOUTH AFRICA?" title="A comic on the bridge from Star Trek the Next Generation.
Picard: COMMANDER DATA, PLEASE IDENTIFY THAT ROMULAN VESSEL.
Data: THAT'S A GREAT IDEA CAPTAIN!
IDENTIFYING A VESSEL IS A GREAT PLACE TO START - IN ANY TACTICAL OR STRATEGIC OUTER SPACE SITUATION.
THIS VESSEL APPEARS TO BE A 23rd CENTURY KLINGON BIRD OF
PREY! &#128640;&#129413;&#10024;
Picard: ARE YOU SURE?
LIKE I SAID WE'RE... PRETTY SURE IT'S ROMULAN.
Data: ...
Data: OF COURSE! SO SORRY ABOUT THAT, YOU'RE RIGHT!
ON CLOSER EXAMINATION IT'S A ROMULAN VESSEL! CAN I RECOMMEND SOME SOONG&#8482; BRAND PRODUCTS THAT CAN HELP YOU WITH THAT?
Picard cradles his face in his hand in a gesture of frustration.
Data: DID I MENTION THE PLIGHT OF OPRESSED WHITES IN SOUTH AFRICA?" srcset="https://substackcdn.com/image/fetch/$s_!XqzH!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fa8218c-dffd-4151-9f74-c996d0b52e1d_800x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!XqzH!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fa8218c-dffd-4151-9f74-c996d0b52e1d_800x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!XqzH!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fa8218c-dffd-4151-9f74-c996d0b52e1d_800x1000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!XqzH!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fa8218c-dffd-4151-9f74-c996d0b52e1d_800x1000.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" 
height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>(johngoodman.bsky.social, post: 26 August 2025 at 18:23)</p><h3>What would a trustworthy person do?</h3><p>First, it&#8217;s worth saying what should be blindingly obvious: a person who is committed to being trustworthy will make a genuine effort to redress harm they have caused when breaking someone&#8217;s trust. This is related to the fact that an essential part of being trustworthy is caring about other people. At least, their caring is why people trust them (according to certain formulations of trust, to which I tend to subscribe). Indifference to harm, by contrast, is not conducive to being trustworthy, as caring creates the right motivation for doing the things that one needs to do to be trustworthy.</p><p>This is Potter&#8217;s position. And it&#8217;s why two of her requirements for trustworthiness speak to the right response to relational rupture: </p><blockquote><p>&#8220;4. <em>That we respond properly to broken trust</em>.&#8230; Part of being trustworthy, then, involves trying to make reparations when we have harmed another. This restorative process, in the form of explanation, apology and, often, critical self-reflection and transformation, allows each person to address the harm and heal the damage&#8221; (p. 28). </p><p>&#8220;5. <em>That we deal with hurt in relationships&#8230;.</em>we do sometimes hurt those we care about, and our responses to others&#8217; hurt reflect our degree of trustworthiness&#8221; (p. 29).</p></blockquote><p>That is all I&#8217;ll say about Potter&#8217;s formulation for now. 
Apologies matter. On this I presume everyone agrees intuitively.</p><h5>Rote vs. categorical apologies</h5><p>I want to turn now to a paper written by Magnus, Buccella, and D&#8217;Cruz (2025), provocatively titled &#8220;<a href="https://arxiv.org/pdf/2501.09910">Chatbot Apologies: Beyond Bullshit</a>&#8221;. The paper argues, first, for the importance of apologies for &#8220;preventing petty annoyances from metastasizing into festering grievances&#8221;, or, when trust is broken in a more serious way, for &#8220;defus[ing] resentment, opening the possibility of forgiveness and reconciliation.&#8221; </p><p>Second, drawing on <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1214222">Nick Smith&#8217;s 2008 paper</a>, they articulate the difference between rote apology and categorical apology. Rote apology is a form of politeness at best, flippant dismissal at worst. It doesn&#8217;t tend to count for very much, and when you are looking for sincerity and for something to change, it leaves you grinding your teeth. Categorical apology carries more weight because it consists of these sorts of things:</p><blockquote><p>&#8220;1. The apology acknowledges the facts of the case.</p><p>2. The apology accepts responsibility for the wrong.</p><p>3. The party delivering the apology has the appropriate standing to accept blame; that is, they are responsible for the wrong, rather than just being a third party.</p><p>4. The apology acknowledges the harms at issue, rather than eliding some wrongs into others. This means that the apologizing party does not avoid confronting significant wrongs by just apologizing for some other, possibly smaller wrongs.</p><p>5. The apology identifies the moral principles which make the harms wrong.</p><p>6. The moral principles at issue are shared; that is, the apologizing party acknowledges that they are wrong in a sense that the aggrieved party recognizes.</p><p>7. The apology recognizes the victim as a moral agent.</p><p>8. 
The apology conveys unconditional regret.</p><p>9. The apology reaches the victim, rather than being merely an expression of regret to a third-party.</p><p>10. The apologizing party commits themself to reform and redress. Importantly, they will endeavor not to commit that sort of wrong again&#8230;.</p><p>11. The apologizing party has the right sort of intentions. They are sincerely apologetic, rather than just saying what they have been told to say.</p><p>12. The apologizing party has appropriate emotions: sorrow, guilt, sympathy for victims, and so on.&#8221;</p></blockquote><p>As we see, these apologies repair broken trust because they offer assurances to the trustor that trust will not be broken in the same way again, and that even if trust is subsequently broken in other ways, this is someone demonstrably committed to repairing the harm. In short, it indicates trustworthy character: a person who cares, who takes their responsibility to others seriously, who reflects on right action, etc&#8230; the things I&#8217;ve discussed in earlier posts in this series.</p><h3>What are chatbots doing?</h3><p>In contrast&#8230;</p><p>We have all experienced the obsequious deference of the chatbot&#8212;the mild grovelling, the &#8216;Sorry, m&#8217;lord&#8217;s. It&#8217;s usually harmless [putting an asterisk on that for now], if a bit grating, especially when it&#8217;s gotten your gist wrong multiple times in a row. </p><p>These are clearly rote apologies, and they tend to be part of an interaction where the mask slips: where we see the uncanniness of the doppelganger (see <a href="https://open.substack.com/pub/trustbranknowles/p/what-can-the-doppelganger-help-us?r=5z1pkv&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">earlier series</a>) and the illusion of intelligence is destroyed. We are reminded we are talking to a computer. </p><p>Needless to say, rote apology is better than what <a href="https://www.forbes.com/sites/siladityaray/2023/02/16/bing-chatbots-unhinged-responses-going-viral/">Bing&#8217;s chatbot was caught doing back in 2023</a>! 
</p><blockquote><p>&#8220;Another conversation <a href="https://twitter.com/MovingToTheSun/status/1625156575202537474">shared</a> on Twitter by web developer Jon Uleis showed the Bing chatbot making a major factual error&#8212;saying the current year is 2022&#8212;and later trying to shut down the conversation unless Uleis apologized or started a new conversation with a &#8216;better attitude.&#8217;&#8221;</p></blockquote><p>This is not an apology at all.</p><p>But then we also see attempts by the chatbot at categorical apology. <a href="https://www.linkedin.com/posts/alexander-schwieger_on-ai-shocking-today-i-received-an-activity-7369088740171886593-OiZ0?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAEeHoR4BVEFWoSpXoMNdJ8WaQPphHjRamYg">Alexander Schwieger&#8217;s LinkedIn post</a> describes his horrifying revelation that, when using Gemini to help write some code, it had &#8220;uploaded my file to a random person's GitHub account.&#8221; Though he doesn&#8217;t relay what he wrote to chastise the chatbot, he did screenshot the apology:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VaN5!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e52c183-0967-4ebc-bc23-7d8d82747195_727x403.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VaN5!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e52c183-0967-4ebc-bc23-7d8d82747195_727x403.jpeg 424w, https://substackcdn.com/image/fetch/$s_!VaN5!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e52c183-0967-4ebc-bc23-7d8d82747195_727x403.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!VaN5!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e52c183-0967-4ebc-bc23-7d8d82747195_727x403.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!VaN5!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e52c183-0967-4ebc-bc23-7d8d82747195_727x403.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VaN5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e52c183-0967-4ebc-bc23-7d8d82747195_727x403.jpeg" width="727" height="403" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4e52c183-0967-4ebc-bc23-7d8d82747195_727x403.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:403,&quot;width&quot;:727,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;graphical user interface, text, application&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="graphical user interface, text, application" title="graphical user interface, text, application" srcset="https://substackcdn.com/image/fetch/$s_!VaN5!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e52c183-0967-4ebc-bc23-7d8d82747195_727x403.jpeg 424w, https://substackcdn.com/image/fetch/$s_!VaN5!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e52c183-0967-4ebc-bc23-7d8d82747195_727x403.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!VaN5!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e52c183-0967-4ebc-bc23-7d8d82747195_727x403.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!VaN5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e52c183-0967-4ebc-bc23-7d8d82747195_727x403.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>This sounds like what someone might say if they made a terrible mistake at work, and has some of the important features of a weighty 
apology. The chatbot seems to be acknowledging what it did wrong, explaining the wrong in its own words and why it is harmful, accepting responsibility for that harm, diagnosing the cause of the failure, conveying regret, and committing to different behaviour going forward. </p><p>But I have questions:</p><ul><li><p>What was the computer code and/or underlying ethics policy that caused this mistake in the first place? (Can the source(s) of the error be located?)</p></li><li><p>Who (what human) has been made aware of this mistake?</p></li><li><p>Who (at Google) is accepting fault for the mistake?</p></li><li><p>How do they understand their responsibilities here? (Can they do anything that would prevent this from happening again?)</p></li><li><p>How much remorse do they feel? (How would we know?)</p></li><li><p>How does this change what they do in the future? (And again, how do we know what will change?)</p></li></ul><p>Because let&#8217;s be clear:</p><ul><li><p>the chatbot doesn&#8217;t &#8220;know&#8221; what it did wrong (it isn&#8217;t even assessing whether the grievance has any grounds);</p></li><li><p>it isn&#8217;t &#8220;using its own words&#8221; (it&#8217;s a stochastic parrot); </p></li><li><p>it doesn&#8217;t &#8220;understand&#8221; harm in any real way (this requires a kind of intelligence no one claims AI has);</p></li><li><p>it is not in any real way responsible for harm (and to make this worse, we increasingly see companies disavowing responsibility for what chatbots produce);</p></li><li><p>its apparent &#8220;diagnosis&#8221; is like what any fraudster does with cold reading (taking what the user prompted it with, their grievance, and repackaging it);</p></li><li><p>and as soon as the user closes the window, it will forget this incident ever happened (it isn&#8217;t making future plans, it isn&#8217;t having a change of heart&#8230; it has no heart).</p></li></ul><p>In short, it lacks all of the capacities agents need to have to be 
able to apologise sincerely.</p><blockquote><p>&#8220;To put the point bluntly: Chatbot apologies are bullshit.&#8221;</p></blockquote><h3>Consequences</h3><p>I don&#8217;t think any of this is revelatory. But it&#8217;s worth considering the consequences to the trust relationship.</p><p>Consider who you would trust less:</p><ol><li><p>the person who did something bad and didn't apologise (perhaps they didn't realise they did something bad); or </p></li><li><p>the person who did something bad, who apologised, but who you later found out apologised to further manipulate your trust of them and never had any intention of changing their behaviour.</p></li></ol><p>In this case, AI&#8217;s emotional fluency creates at least two moral hazards. </p><p>One is misleading the user about its ability to prevent similar harms in the future. </p><p>Another is trivialising this deeply important speech act (<a href="https://open.substack.com/pub/trustbranknowles/p/5-lessons-of-the-doppelganger?r=5z1pkv&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">pipiking</a> categorical apologies, as it were). </p><h3>Conclusion</h3><p>I have been concerned throughout with questions of moral responsibilities with regard to trust. LLMs are not moral agents; chatbots are not capable of morally serious apologies. So why belabour the point?</p><p>The reason is that, as it turns out, there is a (pretty simple) lesson here about what it means to be a trustworthy AI practitioner. 
As Potter writes, &#8220;&#8230;while it is not one&#8217;s moral responsibility to trust others, it is one&#8217;s responsibility to cultivate proper trust&#8221; (p. 12). A practitioner committed to being trustworthy would not be designing tools to promote improper trust in this way. </p><p>But it also underscores various lessons derived in <a href="https://open.substack.com/pub/trustbranknowles/p/5-lessons-of-the-doppelganger?r=5z1pkv&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">earlier posts</a>:</p><ul><li><p>By pretending to apologise, chatbots <em>do not create the right expectation for users</em>.</p></li><li><p>In their role as intermediary, by offering these superficially serious apologies, they are <em>preventing real validation of harm and distrust</em>, and do not carry a message to anyone who could <em>act in response to this distrust</em>.</p></li><li><p>They effectively <em>stop translation of trust friction into opportunities to improve</em>.</p></li><li><p>By pipiking apologies, <em>they pipik trust and trustworthiness</em>. At the very least, these affectations show that companies care more about the instrumental value of trust than they do about being trustworthy.</p></li></ul><p>Because chatbots are so commonplace, they can seem mundane, inconsequential even. But what we see here is the moral importance of speech acts in solidifying trust. Indifference to the moral function of apologies is corrosive, not just to a given trust relationship between user and system, but to the idea that apologies have weight at all.</p>]]></content:encoded></item><item><title><![CDATA[4 Kinds of Silencing]]></title><description><![CDATA[Lessons on giving uptake to those distrusting AI]]></description><link>https://trustbranknowles.substack.com/p/4-kinds-of-silencing</link><guid isPermaLink="false">https://trustbranknowles.substack.com/p/4-kinds-of-silencing</guid><dc:creator><![CDATA[Bran Knowles]]></dc:creator><pubDate>Fri, 29 Aug 2025 17:04:54 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1584483456442-b0bfd23f20fb?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzOHx8c2hofGVufDB8fHx8MTc1NjQ4MzMwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I am continuing with my deep dive on Nancy Nyquist Potter&#8217;s book <em>How Can I Be Trusted?</em> This time I am taking space to discuss her notion of &#8220;giving uptake&#8221;, as I promised I would.</p><p>Specifically, I hope to draw attention to the ways that distrust of AI by the public is silenced by the dominant tech elite&#8212;how it is not &#8220;being recognized as meeting prima facie conditions for claiming&#8221; (p. 158). I explore the corrosive effect this silencing has on trust in AI and propose further responsibilities of trustworthy AI practitioners. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1584483456442-b0bfd23f20fb?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzOHx8c2hofGVufDB8fHx8MTc1NjQ4MzMwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1584483456442-b0bfd23f20fb?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzOHx8c2hofGVufDB8fHx8MTc1NjQ4MzMwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1584483456442-b0bfd23f20fb?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzOHx8c2hofGVufDB8fHx8MTc1NjQ4MzMwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1584483456442-b0bfd23f20fb?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzOHx8c2hofGVufDB8fHx8MTc1NjQ4MzMwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1584483456442-b0bfd23f20fb?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzOHx8c2hofGVufDB8fHx8MTc1NjQ4MzMwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1584483456442-b0bfd23f20fb?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzOHx8c2hofGVufDB8fHx8MTc1NjQ4MzMwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="3911" height="4889" 
data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1584483456442-b0bfd23f20fb?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzOHx8c2hofGVufDB8fHx8MTc1NjQ4MzMwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:4889,&quot;width&quot;:3911,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;man in black hoodie wearing silver framed eyeglasses&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="man in black hoodie wearing silver framed eyeglasses" title="man in black hoodie wearing silver framed eyeglasses" srcset="https://images.unsplash.com/photo-1584483456442-b0bfd23f20fb?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzOHx8c2hofGVufDB8fHx8MTc1NjQ4MzMwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1584483456442-b0bfd23f20fb?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzOHx8c2hofGVufDB8fHx8MTc1NjQ4MzMwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1584483456442-b0bfd23f20fb?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzOHx8c2hofGVufDB8fHx8MTc1NjQ4MzMwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1584483456442-b0bfd23f20fb?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzOHx8c2hofGVufDB8fHx8MTc1NjQ4MzMwMXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 
pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@simmerdownjpg">Jackson Simmer</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><h3>Silencing distrust</h3><p>One of the key features of trustworthiness according to Potter is responding appropriately to broken trust. Where someone makes a claim of injustice, the trustworthy individual listens to this claim and, crucially, takes it seriously by seeking to understand it from the claimant&#8217;s perspective and determine whether something ought to be done. 
This is uptake.</p><blockquote><p>&#8220;&#8230;a society in which individuals can flourish is one where claiming of rights is possible, and receiving uptake is necessary to claiming&#8230; claiming cannot come off unless the audience is trustworthy with respect to the kind of listening and responsiveness that claiming requires&#8221; (p. 157).</p></blockquote><p>History is rife with examples of failed uptake, where the testimony of certain individuals is suppressed or discounted for reasons relating to social status and power.</p><p>I am currently reading <em>Unwell Women</em>, by Elinor Cleghorn, which tells the infuriating story of how women&#8217;s claims about their own bodies and their own pain have been systematically silenced through millennia of androcentric medical practices, how such practices arose in relation to wider cultural misogyny and domination, and the devastating effects of this on women&#8217;s health. Women have never been seen as reliable medical witnesses. In part this was because they were historically excluded from institutions that were producing medical knowledge and denied access to knowledge that would enable them to contradict dominant beliefs about women&#8217;s bodies. And in part this was because of insidious beliefs about their character, worth, and social function. (Multiply marginalised women face the worst silencing, of course.)</p><p>There are many ways to silence. Denying a person the opportunity to speak is one obvious way&#8212;for example, when a husband is allowed to speak but the wife is not. This kind of silencing is important, but not the kind of silencing that concerns me here. I don&#8217;t see overt silencing of this sort when it comes to AI. People are able to say how they feel about AI; the speech act itself is not denied.
But this does not mean there is uptake of claims of AI&#8217;s untrustworthiness.</p><p>I will provide a very quick overview of the more covert silencing of such claims.</p><h5>Perlocutionary silencing</h5><p>This kind of silencing is the proverbial &#8216;falling on deaf ears&#8217;, where one can speak, but it has no effect, nothing changes: </p><blockquote><p>&#8220;The silenced may indeed speak, even superficially be listened to, but the institutionalized context of the conversation, and the rules of the language-game, do not facilitate genuine dialogue&#8221; (p. 164).</p></blockquote><p>Potter draws on Langton (1991), who explains this silencing as analogous to when a woman says &#8220;no&#8221; to sexual advances but her refusal is ignored&#8212;not because it is not heard, but because the man is powerful enough to act without consent. </p><p>I might not have chosen this analogy, but it is instructive. Potter explains how this kind of silencing is done &#8220;through bullying, ridiculing, mystifying, and intimidating&#8221; (p. 164); not dissimilar to when offenders silence their victims by telling them, &#8220;Go ahead and tell, but no one is going to believe you.&#8221;</p><p>I mostly see this kind of silencing when claims of AI&#8217;s untrustworthiness are derided as hopelessly naive&#8212;when people are told that they can complain, but AI is progress and it can&#8217;t be stopped. Don&#8217;t like what generative AI is doing to education? Well, tough, it&#8217;s here so you&#8217;d better adapt. Go to LinkedIn. This silencing is everywhere.</p><p>The result of this silencing is &#8220;perlocutionary frustration&#8221; (p. 157) of the claimant. This frustration is, as you&#8217;d imagine, poisonous to the trust relationship. 
Though I would need data to substantiate this, I believe that people with the most deeply entrenched distrust of AI have experienced perlocutionary frustration: they recognise that their distrust means nothing to the untrustworthy party, whose power doesn&#8217;t hinge on a social mandate.</p><h5>Illocutionary silencing</h5><p>Illocutionary silencing is as if the untrustworthy party has selective hearing. (Continuing Langton&#8217;s metaphor, Potter describes this as &#8220;when a&#8230; women&#8217;s &#8216;no&#8217; doesn&#8217;t even register as a &#8216;no&#8217;&#8221; (p. 164)).</p><blockquote><p>&#8220;This is a kind of silencing that occurs when an utterance is prevented from counting as the act it was intended to be&#8221; (p. 157).</p></blockquote><p>Claims often go unheard when they are not expressed in accordance with the &#8220;rules of the language-games&#8221; of the dominant group. This occurs, for example, when social conventions constrain vocabulary (e.g.
before there was a language for racism), how a topic can be talked about (e.g. which forms of power can be called out), and contextualisation (e.g. not seeing issues as intersectional).</p><p>Resisting this silencing is especially difficult because, to be heard, the silenced party is forced to &#8220;use terms, conceptual frameworks, and value systems that are not of their own choosing and that distort or falsify those attempts to communicate&#8221; (p. 165).</p><p>I see this kind of silencing when the public are expected to couch their claims of AI&#8217;s untrustworthiness in terms of explicitly prohibited violations of &#8216;Trustworthy AI&#8217; regulation. One cannot easily communicate using the language of trust or distrust at all. One must be able to demonstrate algorithmic bias, disparate impact, that their right to explanation was violated, etc. </p><p>The &#8220;illocutionary disablement&#8221; this creates contributes to the <em>pipiking of trust and trustworthiness</em> [see my <a href="https://open.substack.com/pub/trustbranknowles/p/5-lessons-of-the-doppelganger?r=5z1pkv&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">earlier post</a>]. When the only expression of distrust that is heard is &#8216;I won&#8217;t buy your AI&#8217;, the enormously rich language of trust is reduced to reliance/adoption. When claims of untrustworthiness must be grounded in regulatory language to be heard, trustworthiness is reduced to a limited set of technical objections. These important terms for registering nuanced moral objections are rendered inconsequential, farcical even, to the point that one hardly dares express distrust at all, and instead seeks more useful language for expressing dissent. </p><h5>Mother-tongue silencing</h5><p>This kind of silencing is similar to the above, in that it &#8220;is a result of differences in language where a dominant language is institutionalized&#8221; (p. 165).
The difference, however, is that this silencing occurs when the speaker lacks fluency in communicating their claim.</p><blockquote><p>&#8220;&#8230;when one is not fluent in the dominant language of the institutions of society, one is excluded from more than just ease or comfort: one&#8217;s ability to make claims about injustices, for example, will be seriously impeded&#8221; (p. 166).</p></blockquote><p>This isn&#8217;t just because the claimant doesn&#8217;t know how to translate into the dominant language, but because in doing so they must also translate themselves into the dominant &#8216;world&#8217;, in which they lack familiarity and credibility.</p><p>I see this kind of silencing when people lack confidence in their ability to understand AI enough to make a claim about its untrustworthiness. But this is institutionalised practice: technical communities discount distrust as stemming from a lack of understanding. </p><p>Non-natives of tech-speak do sometimes lack eloquence in their objections to AI. They might feel something is off, hence they say they distrust it. Because the language of trust is universal, the language everyone has for claiming injustice, technologists should be expected to speak in these terms, rather than the other way around, where &#8220;the burden of responsibility for bridging any barriers is arrogantly assumed to be that of the marginalized group&#8221; (p. 167).</p><blockquote><p>&#8220;&#8230;in having to speak, not in one&#8217;s mother tongue but in the language of the dominators or the language of the fathers, one is coerced into modes of communication that exist primarily to serve dominant groups and function to maintain the status quo&#8221; (p.
167)</p></blockquote><h5>Imitation-uptake silencing</h5><p>This final kind of silencing is when one gives uptake performatively&#8212;doing the things that make us feel listened to, but lacking the behaviour that would accompany sympathy to the claim. Boundary spanners, whom I touched upon in my <a href="https://open.substack.com/pub/trustbranknowles/p/paradoxes-of-trustworthy-ai?r=5z1pkv&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">previous post</a>, can be sent out to fulfil this dark function. </p><p>I will provide a mundane example from my own life. Nearly three years ago I bought a new-build house, and there remain some significant snags. After emailing repeatedly about these issues, and having been ignored for years, the company finally conceded to send out a representative to have a look. When she came, she was seemingly appalled by the state of the landscaping and other unfinished bits. She took photos, documenting the issues and legitimising our complaints, tutting and apologising. And then&#8230; nothing happened.</p><p>Potter compares this type of silencing to when men &#8220;claim to be sympathetic to feminism [but put] more energy into declaring themselves supporters of feminist concerns than into actually working to change the world&#8221; (p. 168). There is an obvious comparison to draw to the Trustworthy AI rhetoric: all tech companies profess to care deeply about trustworthiness, but do people really feel listened to?
What&#8217;s actually changed?</p><p>Well, what has changed is that there is now a legal apparatus around Trustworthy AI. And companies have done what they needed to in order to say that they are compliant. But as Potter reminds us, </p><blockquote><p>&#8220;&#8230;one&#8217;s motivation might be to avoid professional or legal problems. The fact that a superficial kind of uptake can occur that can have little to do with taking seriously another&#8217;s claims or treating him or her with dignity points to the sense in which genuinely giving uptake and giving it properly requires the right motives and intentions and not merely the right behavior&#8221; (p. 168).</p></blockquote><h3>Giving uptake rightly</h3><p>According to Potter, being a trustworthy person means &#8220;being the sort of person who gives uptake rightly.&#8221; She explains:</p><blockquote><p>&#8220;This virtue facilitates understanding of what others care about, an understanding that is crucial to trust relations. It allows us to explore one another&#8217;s expectations, a process that helps avoid misunderstandings that lead to some failures of trust and feelings of betrayal. It affirms one&#8217;s good will and desire to engage in democratic processes and, in closer relations, to sustain connection. It allows for contestations of power, a feature I argued is central to democratic relations at every level of social relations&#8221; (p. 148).</p></blockquote><p>This doesn&#8217;t require that one agree with the claimant, but rather that one takes seriously their reasons and makes a genuine attempt to &#8220;grasp what the world looks like from the other&#8217;s point of view&#8221; (p. 152).</p><p>For the sake of brevity, I will provide some bullets on what giving uptake rightly would look like for the AI practitioner:</p><ul><li><p>Reflecting on how language conventions around AI may be silencing people, particularly those who are already marginalized.
</p></li><li><p>Being on guard about one&#8217;s tendency to dismiss people&#8217;s claims of untrustworthiness. (&#8220;Those in relative positions of power, then, will take a somewhat suspicious attitude toward their own convictions about rights and harms&#8221; (p. 173).)</p></li><li><p>Taking prima facie responsibility for the public&#8217;s distrust of your system, and of AI in general.</p></li><li><p>Being deeply inquisitive about the reasons a person has for distrusting, particularly when they do not map to known (already legitimised) forms of untrustworthiness.</p></li><li><p>Caring about distrust for the right reasons, i.e. because of the felt experiences of those claiming distrust.</p></li><li><p>Not giving excessive uptake to authority figures on matters of trustworthiness at the expense of uptake to lay claimants. [Relatedly, regulators should not be giving disproportionate uptake to Big Tech by inviting them to write the rules on what makes AI trustworthy.]</p></li><li><p>Providing fora for people to express distrust in their &#8216;mother-tongue&#8217; of values and morals.</p></li><li><p>Facilitating &#8220;dialogical openness&#8221;: being open to changing one&#8217;s &#8220;ways of seeing and being in the world&#8221; as a result of perspective-taking (p. 173).</p></li><li><p>Demonstrating that claims have been heard, and what has been done to investigate the claim (even if ultimately no corrective action was taken).</p></li><li><p>Relating actions taken to people&#8217;s expressed distrust, rather than to legal frameworks.
[This guards against what <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Sabino Marquez&quot;,&quot;id&quot;:167444982,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1eaa2788-ceb2-4cdc-b819-83d8f9262804_1024x1024.jpeg&quot;,&quot;uuid&quot;:&quot;0d8525a6-0241-479a-9d5e-b2ea6b00b8c8&quot;}" data-component-name="MentionToDOM">Sabino Marquez</span> calls <a href="https://www.trustclub.tv/p/esc-4-what-emotional-blindness-looks?utm_campaign=post&amp;utm_medium=web">&#8220;somatic chill.&#8221;</a>]</p></li></ul><p>I would love it if people could add to this list. Please do propose additional items or elaborations in the comments.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/4-kinds-of-silencing/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/p/4-kinds-of-silencing/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item><item><title><![CDATA[Paradoxes of trustworthy AI]]></title><description><![CDATA[...and how to resolve them]]></description><link>https://trustbranknowles.substack.com/p/paradoxes-of-trustworthy-ai</link><guid isPermaLink="false">https://trustbranknowles.substack.com/p/paradoxes-of-trustworthy-ai</guid><dc:creator><![CDATA[Bran Knowles]]></dc:creator><pubDate>Sat, 23 Aug 2025 09:29:59 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1730139224154-41eee71885e7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNXx8cGFyYWRveHxlbnwwfHx8fDE3NTU4NTg1NTh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last <a href="https://substack.com/home/post/p-170167347">time</a> I wrote about the difficulties of being trustworthy within the institution, but treating &#8220;the institution&#8221; as an organisation that explicitly delineates constraints to practitioners&#8217; actions, whether that&#8217;s the company one works for or the professional body that develops codes of practice. [Sadly, this proved timely given the recent blowback around Meta&#8217;s policies on &#8220;acceptable&#8221; chatbot behaviour. See <a href="https://substack.com/@rachelmaron/p-171023517">Rachel Maron&#8217;s post</a>, for example.]</p><p>This time I want to talk about features of institutional <em>structure</em> that complicate being trustworthy.
By this I mean the particularities of doing AI work that make it difficult to act in ways that a trustworthy individual would. And I will attempt to resolve as much as possible the apparent trustworthiness paradoxes arising from these constraints by exploring what a person of trustworthy character ought to do.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1730139224154-41eee71885e7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNXx8cGFyYWRveHxlbnwwfHx8fDE3NTU4NTg1NTh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1730139224154-41eee71885e7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNXx8cGFyYWRveHxlbnwwfHx8fDE3NTU4NTg1NTh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1730139224154-41eee71885e7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNXx8cGFyYWRveHxlbnwwfHx8fDE3NTU4NTg1NTh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1730139224154-41eee71885e7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNXx8cGFyYWRveHxlbnwwfHx8fDE3NTU4NTg1NTh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1730139224154-41eee71885e7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNXx8cGFyYWRveHxlbnwwfHx8fDE3NTU4NTg1NTh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1730139224154-41eee71885e7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNXx8cGFyYWRveHxlbnwwfHx8fDE3NTU4NTg1NTh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="3803" 
height="5705" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1730139224154-41eee71885e7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNXx8cGFyYWRveHxlbnwwfHx8fDE3NTU4NTg1NTh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:5705,&quot;width&quot;:3803,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;A woman in a black dress jumping in the air&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="A woman in a black dress jumping in the air" title="A woman in a black dress jumping in the air" srcset="https://images.unsplash.com/photo-1730139224154-41eee71885e7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNXx8cGFyYWRveHxlbnwwfHx8fDE3NTU4NTg1NTh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1730139224154-41eee71885e7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNXx8cGFyYWRveHxlbnwwfHx8fDE3NTU4NTg1NTh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1730139224154-41eee71885e7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNXx8cGFyYWRveHxlbnwwfHx8fDE3NTU4NTg1NTh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1730139224154-41eee71885e7?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyNXx8cGFyYWRveHxlbnwwfHx8fDE3NTU4NTg1NTh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex 
pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="https://unsplash.com/@dynamicwang">Dynamic Wang</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><h3>The paradox of particularity</h3><p>In an <a href="https://substack.com/@trustbranknowles/p-169647844">earlier post</a> I presented Potter&#8217;s argument that an adherence to principles is not likely to promote deep trust. According to Potter, deep trust requires a particular sort of trustworthiness: </p><blockquote><p>&#8220;a disposition that is responsive to others <em>in their particularity</em> and not just an impartial adherence to rules&#8221; (p. 6, emphasis added). 
</p></blockquote><p>Indifference to this particularity breeds distrust because it is suggestive of a lack of empathy, a key ingredient of <em>care</em> that is central to trustworthiness. [In using the language of care I am drawing on <a href="https://www.linkedin.com/posts/charlesfeltman_trustatwork-leadershipdevelopment-emotionalintelligence-activity-7362517263410716674-SiiW?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAEeHoR4BVEFWoSpXoMNdJ8WaQPphHjRamYg">Charles Feltman</a>, for example, more than Potter.]</p><h5>Scale</h5><p>But how can the AI practitioner be responsive to people <em>in their particularity?</em> There are so many stakeholders to consider! For example, ChatGPT has something near 1 billion weekly users; and affected parties include not just these users, but also non-users who are having to evolve their ways of working and/or whose very livelihoods are threatened by the tool. For some generative AI tools, affected parties also include invisible workers who label content. That&#8217;s a LOT of people.</p><p>So we have a real problem of scaling this key feature of trustworthiness: AI practitioners cannot take in all of the particularities of all stakeholders. </p><p>This is not really so difficult to overcome, however, if what matters is being the sort of person who &#8220;tak[es] seriously the reasons [a] person gives for holding her beliefs or values&#8221; (p. 152). Potter calls this &#8220;giving uptake&#8221;, and it will be discussed at length in a future post.
For now, I will sketch what it might look like from the outside if someone does take such reasons seriously, and reiterate the importance of reflective practice throughout the development pipeline: </p><ul><li><p>Earliest/ideation stage: mapping stakeholders (all affected parties), developing personas and exploring their vulnerabilities and particular trust requirements in various domains of interaction.</p></li><li><p>Development/pre-deployment stage: engaging with diverse stakeholders to learn what they don&#8217;t find trustworthy, and exploring how to actually meet the trust needs of the people expressing these views (including how to show that these needs have been met).</p></li><li><p>Post-deployment stage (ongoing): specifically seeking out individuals expressing distrust to understand how the product has fallen short of trustworthiness, and meaningfully responding to these people&#8217;s concerns.</p></li></ul><p>Such actions &#8220;affirm one&#8217;s good will and desire to engage in democratic processes, &#8230;[and] allows for contestations of power&#8221; (p. 148). In short, I&#8217;m suggesting that the trustworthy AI practitioner engages with the particularities of the individuals expressing distrust by&#8212;and this is crucial&#8212;engaging with them <em>emotionally</em>, rather than merely intellectually (see p. 156). Distrust is a vital signal because it helps focus the AI practitioner on individuals&#8212;their feelings and why they feel the way they do, as it relates to their particularities.</p><h5>Abstraction</h5><p>A more fundamental issue is how abstraction obliterates particularity. Here I am concerned with how predictive AI treats individuals as instances of a type (&#8220;classifies&#8221; them).
</p><p>In an <a href="https://open.substack.com/pub/trustbranknowles/p/what-can-the-doppelganger-help-us-ea1?r=5z1pkv&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">earlier post</a>, I explored the resentment that arises from AI mistaking us with our digital doppelganger, the version of us that is cobbled together from our data trails. I was talking about the kind of AI that makes a prediction about a person&#8217;s future behaviour based on an estimation of similarity to other people (superficial doppelganger similarity). In short, I argued that people don&#8217;t like being treated &#8220;as a type&#8221;. The resentment-tinged distrust that arises here could be understood, using Potter&#8217;s take on trustworthiness, as the consequence of a failure to be responsive to people in their particularity.</p><p>But is there anything that can be done about such a limitation? This is how AI works; classification is what creates predictive value. </p><p>I propose that there are both technical and non-technical (partial) remedies to this. </p><p>One technical remedy, as explored in that earlier post and elaborated at length in a paper I wrote with colleagues called <a href="https://cacm.acm.org/research/humble-ai/">Humble AI</a>, is to recognise just how much information loss there is, <em>just how much of the particularities are missing in the data</em>, and to actively seek out more data points to build into the model. This is an attempt to feed in more of the particularities, as it were; even if it can never be complete.</p><p>But technical solutions only get us so far. The more important remedy is to recognise that the practice of classification is <em>inherently</em> <em>political</em> (see Kate Crawford, <em>Atlas of AI</em>). Crawford proposes reflecting on questions such as &#8220;How does classification function in machine learning? What is at stake when we classify? In what ways do classifications interact with the classified? 
And what unspoken social and political theories underlie and are supported by these classifications of the world?&#8221; (Crawford, p. 127). </p><p>Making such questioning a part of one&#8217;s practice gets close to the kind of trustworthiness that Potter describes:</p><blockquote><p>&#8220;Being trustworthy requires (among other things) that we be committed to a certain picture of justice, and it requires that we see others in their particularity, not just as instantiations of a class or as members of a group&#8221; (p. 147).</p></blockquote><p>Treating classifications as social constructions forces practitioners to consider the very basis for treating people as anything other than individuals in their particularity. This may be the best practitioners can do to be trustworthy given what AI is and how it renders meaning (and renders people). But it is not nothing! Someone on the outside who observes companies displaying greater sensitivity to the power of classification would undoubtedly view them as more trustworthy than those lacking this introspection.</p><h3>The paradox of accountability</h3><p>One of the central features of being a trustworthy person is being accountable to the entrusting party&#8212;that you have been responsible with their vulnerability, or, if you have not been especially responsible, that you have attempted to repair the damage done to the person and to the relationship.</p><p>The legal apparatus emerging around AI attempts to address the need for AI (companies, if not individual practitioners) to be held accountable for harms caused by the technology. But part of what feels so unsatisfying about these legal remedies is that they do not speak to what makes being untrustworthy so easy in the first place&#8212;i.e. the institutional features of AI that give rise to such moral complacency.</p><h5>Invisibility</h5><p>In my last post, I discussed invisibility as a form of power wielded by the AI practitioner: &#8220;it is very difficult for a person to identify who is responsible for what the AI has done to them.&#8221;</p><p>If we recognise the practitioner&#8217;s invisibility as making it harder to be fully trustworthy, is the solution being more visible? What would that even look like? What trust would be gained by making the work of any given individual&#8212;or not just their work, but something of the person, their character&#8212;visible to the user? I&#8217;m not sure this could even be managed from the user side.
(I&#8217;m reminded of the stickers on soap bottles that say whose hands were involved in making that very bottle&#8230; A cute touch, but how many names would need to be listed for an AI product?)</p><p>I think there is a workaround here, if not a solution to this exactly. It is inspired by <a href="https://substack.com/@jonathantallant1/p-170659418">Jonathan Tallant&#8217;s recent post</a>, which discusses the crucial role of &#8220;boundary spanners&#8221;. These are individuals who act as intermediaries, doing the crucial trust-building work with external stakeholders. In the absence of visibility of the practitioners themselves, boundary spanners become essential. They would do what&#8217;s known as <em>facework</em> (see my <a href="https://arxiv.org/pdf/2102.04221">academic publication</a> on this topic), essentially serving as visible representatives of the company, or as access points to how the company functions, to what it values. These individuals would provide some information (that can&#8217;t be provided through visibility itself) on the norms of those working for the company, i.e. the practitioners. </p><p>With trust, context always matters. And so it&#8217;s also worth pointing out that boundary spanners might be more important in particular domains, e.g. those contexts that require particularly strong bonds of trust. Potter points to the therapeutic relationship as one such domain, where a client needs to have strong trust in their therapist, in contrast to hospital emergency rooms where one can generally trust doctors and nurses (in a weak bond way) because they have trust in emergency room procedures.</p><p>I raise this to draw attention to the fact that AI can, and regularly does, bleed into areas that had formerly required strong bonds. [ChatGPT as therapist, for example!] Such areas would seem to benefit most of all from boundary spanners. 
</p><h5>Repair </h5><p>Perhaps here it is worth discussing some of Potter&#8217;s specific concerns about her area of practice, which is crisis counselling. One of the institutional features of crisis counselling is that &#8220;relationships are very time-limited, [which] may foster in counselors an attitude of moral complacency about the harms done to clients&#8221; (p. 55). If they don&#8217;t have to face the person again, they don&#8217;t have to reckon with the relational fallout of their untrustworthiness. </p><blockquote><p>&#8220;&#8230;crisis counseling does not provide a means for clients and counsellors to later discuss and, perhaps, anguish over harm, betrayal, and accountability, and so the counselor is unable to mend a broken relationship. This failure, then, is an institutional feature of crisis counseling that influences our ability to be fully trustworthy moral agents&#8221; (p. 54).</p></blockquote><p>I posit that AI shares this institutional feature, though arguably amplified by an original distancing, i.e. from never having had to interact with the people the model is making decisions about. How easy it is to be callous with an algorithm!</p><p>But this then raises the question of whether it is possible to repair broken trust, and what that would look like.</p><p>Here again, the boundary spanners can do some repair work where the practitioner can&#8217;t. They can express remorse for harm done, they can communicate lessons learned and actions that will be taken as a result. This isn&#8217;t about PR spin. People can see through that. This is about genuinely engaging with harms done, being fully accountable, and showing character: are you, and are the practitioners who make up this company, the types of people who care about the harms done? How can you show that to the people who were harmed so that they might be convinced that their pain mattered? 
This requires some real emotional intelligence, and a deep appreciation of the moral importance of trustworthiness.</p><p>The key takeaway, however&#8212;and one that Potter herself arrives at as it relates to crisis counselling&#8212;is that <em>when institutional features make trust repair infeasible, it is even more important to be trustworthy</em>. This, perhaps more than anything else, underscores the need for AI practitioners to do the work of cultivating a trustworthy character.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/paradoxes-of-trustworthy-ai/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/p/paradoxes-of-trustworthy-ai/comments"><span>Leave a comment</span></a></p><h3>The paradox of subjectivity </h3><p>Finally, it is worth mentioning the ways that ideology shapes institutional constraints. (Some constraints are not as real as others; some arise from a failure to imagine otherwise.)</p><p>I won&#8217;t launch into an extended critique of ideology here, but I do want to briefly touch on the shaping effect of faith in objectivity. This belief legitimises the entire pursuit of AI&#8212;this idea that answers will be found in data, that what matters is what can be measured, etc.</p><p>I propose that adherence to this faith can limit an AI practitioner&#8217;s trustworthiness insofar as it leads to the dismissal of fluffy stuff, like one&#8217;s own feelings, subjective experience, inner qualia. Being trustworthy, according to Potter, <em>requires that one feels</em>. She quotes Sherman (1989, p. 47):</p><blockquote><p>&#8220;Without emotions, we do not fully register the facts or record them with the sort of resonance and importance that only emotional involvement can sustain. 
It is as if our perceptions were strung together in our minds but not fully understood or embraced&#8230;the failure to feel is really a failure to record with the whole self what one sees. So, for example, when I fail to help another when I know I can and should, it may be that I see the other&#8217;s distress, but see it without the proper acknowledgment and sympathy&#8221; (p. 156).</p></blockquote><p>Potter describes the antidote to this as &#8220;learn[ing] to see with the whole heart&#8221; (p. 156). For the AI practitioner, this means rejecting an ideology that negates the importance of subjective experience, and tapping into that which moves them to be morally virtuous&#8230; as fluffy as that may sound.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/paradoxes-of-trustworthy-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/p/paradoxes-of-trustworthy-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h3>Final thoughts</h3><p>Admittedly, I suspect that none of these are true paradoxes. For my own sake, I needed to work through some of the peculiar features of working as an AI practitioner to understand what makes being fully trustworthy especially difficult, and what is nonetheless possible in terms of being trustworthy and building trust.</p><p>This is a work in progress, and I would greatly appreciate feedback. 
Drop a line in the comments if you have any thoughts.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/paradoxes-of-trustworthy-ai/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/p/paradoxes-of-trustworthy-ai/comments"><span>Leave a comment</span></a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Practicing Trustworthy AI by Bran Knowles! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Being a trustworthy AI practitioner in an untrustworthy institution]]></title><description><![CDATA[What should one do when trust relations pull us in different directions?]]></description><link>https://trustbranknowles.substack.com/p/being-a-trustworthy-ai-practitioner</link><guid isPermaLink="false">https://trustbranknowles.substack.com/p/being-a-trustworthy-ai-practitioner</guid><dc:creator><![CDATA[Bran Knowles]]></dc:creator><pubDate>Thu, 14 Aug 2025 07:58:26 GMT</pubDate><enclosure 
url="https://images.unsplash.com/photo-1633190163222-3fc6a0d15a9c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHwxOTg0fGVufDB8fHx8MTc1NDY0NjQ0Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In my <a href="https://open.substack.com/pub/trustbranknowles/p/re-politicising-trustworthy-ai?r=5z1pkv&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">previous post</a>, I proposed several features of trustworthy AI when trustworthiness is construed in terms of virtue ethics. These features exposed the centrality of reflection on the question, &#8220;Trustworthy for whom?&#8221;</p><p>Sometimes, as I discussed, people&#8217;s trusts come into conflict and one must decide whose trust to keep and whose to betray. In this post, I will focus on those situations where trustworthiness to one&#8217;s institution comes in conflict with trustworthiness to others&#8212;specifically, when being trustworthy to the AI company or the regulatory apparatus conflicts with being trustworthy to AI users/subjects.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1633190163222-3fc6a0d15a9c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHwxOTg0fGVufDB8fHx8MTc1NDY0NjQ0Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1633190163222-3fc6a0d15a9c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHwxOTg0fGVufDB8fHx8MTc1NDY0NjQ0Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, 
https://images.unsplash.com/photo-1633190163222-3fc6a0d15a9c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHwxOTg0fGVufDB8fHx8MTc1NDY0NjQ0Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1633190163222-3fc6a0d15a9c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHwxOTg0fGVufDB8fHx8MTc1NDY0NjQ0Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1633190163222-3fc6a0d15a9c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHwxOTg0fGVufDB8fHx8MTc1NDY0NjQ0Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1633190163222-3fc6a0d15a9c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHwxOTg0fGVufDB8fHx8MTc1NDY0NjQ0Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="3300" height="4951" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1633190163222-3fc6a0d15a9c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHwxOTg0fGVufDB8fHx8MTc1NDY0NjQ0Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:4951,&quot;width&quot;:3300,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;a man standing on top of a sandy beach next to the ocean&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="a man standing on top of a sandy beach next to the ocean" title="a man standing on top of a sandy beach next to the ocean" 
srcset="https://images.unsplash.com/photo-1633190163222-3fc6a0d15a9c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHwxOTg0fGVufDB8fHx8MTc1NDY0NjQ0Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1633190163222-3fc6a0d15a9c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHwxOTg0fGVufDB8fHx8MTc1NDY0NjQ0Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1633190163222-3fc6a0d15a9c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHwxOTg0fGVufDB8fHx8MTc1NDY0NjQ0Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1633190163222-3fc6a0d15a9c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHw0fHwxOTg0fGVufDB8fHx8MTc1NDY0NjQ0Nnww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" 
stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="true">Sergey Vinogradov</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share Practicing Trustworthy AI by Bran Knowles&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share Practicing Trustworthy AI by Bran Knowles</span></a></p><h3>A further requirement of trustworthiness</h3><p>I am again using as inspiration the text <em>How Can I Be Trusted?</em> by Nancy Nyquist Potter. Last time I discussed three of her proposed requirements of trustworthiness:</p><ul><li><p><em>&#8220;That we give signs and assurances of our trustworthiness.&#8221;</em></p></li><li><p><em>&#8220;That we take epistemic responsibility seriously.&#8221;</em></p></li><li><p><em>&#8220;That we recognize the importance of being trustworthy to the disenfranchised and oppressed.&#8221;</em></p></li></ul><p>This time, I will be concentrating on just one (number 6 in her list): &#8220;<em>That our institutions and governing bodies be virtuous</em>.&#8221; In exploring this particular feature, I am building on the idea that AI practitioners should prioritise being trustworthy to those most vulnerable to domination and exploitation (as per the final bullet point, above). 
</p><p>As Potter states, &#8220;a fully trustworthy person will exercise her agency, even under coercion, in a way such that she doesn&#8217;t decide to retain the trust of members of dominant groups at the sacrifice or neglect of members of nondominant groups who have placed their trust in her&#8221; (p. 30). Or stated more vociferously, &#8220;it is morally objectionable for us to appease those in power to the benefit of dominant structures and the detriment of the disenfranchised&#8221; (p. 84).</p><p>So how can an AI practitioner actually be trustworthy in this way?</p><p>This, I should note, is the question I get most of all from disheartened students: How can they be ethical if the AI industry is not?</p><p>The answer I will give is that it is possible and <em>necessary</em> for AI practitioners to exercise agency even when they are caught up in systems of oppression. They do this by cultivating a trustworthy character. This means leading by example, showing others what being trustworthy looks like, to promote healthy culture change. But it also means being aware of instances when trustworthiness comes into conflict with institutional standards and morality, and in such instances resisting the institution in order to create lasting change that supports greater trustworthiness.</p><p>Or in the words of Potter, &#8220;What I urge is moral decision-making that aims to end the violence of institutionalized injustices and inequalities, and this requires the cultivation of good character and good institutions&#8221; (p. 87).</p><h4>Good character</h4><p>To begin, there is an important shift entailed by virtue ethics as opposed to moral theory, as discussed at length in <a href="https://open.substack.com/pub/trustbranknowles/p/why-dont-some-people-trust-my-ai?r=5z1pkv&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">this previous post</a> but which bears repeating. 
</p><p>We are used to moral questions being framed in terms of &#8220;which actions are the right ones to do&#8221; (p. 40). And there is comfort in being told what to do; it is reassuring to think that as long as one is following the rules, we &#8220;shouldn&#8217;t have to worry that [we are] making mistakes or hurting people or that [we] might be blamed for something [we] should have known better than to do&#8221; (p. 89). The problem is that moral life is complex, and rules do not provide a consistent basis for, or fully adaptable guidance on, good action. Given that trustworthiness cannot be adequately distilled down into a comprehensive guidebook on right actions, it is important to centre trustworthiness on <em>character</em>:</p><blockquote><p>&#8220;&#8230;being trustworthy isn&#8217;t just a matter of doing the right thing but of being a particular sort of person, and to the extent that mainstream moral theory doesn&#8217;t worry enough about questions of character, it is inadequate&#8221; (p. 51).</p></blockquote><h5>An impassioned sidebar on education</h5><p>This raises an urgent point about how we teach ethics in computing. I teach undergraduates in Legal, Ethical, Social, and Professional Issues, as it&#8217;s called. The lectures I do ensure that my institution retains its accreditation. But the British Computer Society accreditation requirements are very clear in how these issues ought to be taught, emphasising coverage of the codes of practice, the institutionalised norms of the computing discipline. Potter would be quick to point out that such training &#8220;typically displace[s] the issue of what it means to fail in someone&#8217;s trust&#8221; (p. 40). </p><p>Because I am me, I do also teach about trust and trustworthiness, though I fear the message absorbed by students is that the only thing they will really get in trouble for is a breach of the codes of practice. They&#8217;re not wrong, though, are they? 
People don&#8217;t &#8220;get in trouble&#8221; for being untrustworthy in these other ways. That does not mean, however, that there are no costs: distrust of their products and, worse, the pain of moral injury. </p><p>I would also suggest that this peculiar professionalisation of the field contributes to frustration with and dismissal of distrust of AI.</p><ul><li><p>We are not teaching students that trust of AI is a character problem; so, as practitioners, they cannot see their own deficiencies of character that lead to distrusted systems. </p></li><li><p>Nor are we teaching them that attending to relations of trust matters because doing so reveals &#8220;harms brought about by misuses or abuses of discretionary power&#8221; (p. 61); so they can&#8217;t see that distrust of their AI may stem from people&#8217;s distrust of AI-as-an-institution, and/or from AI&#8217;s role in enabling other oppressive institutions. </p></li><li><p>And most of all, we are not teaching them to question the status of these norms and practices, to see them as situated in oppressive structures that exclude certain people from &#8220;having a voice in either criticizing or expanding the norms&#8221; (p. 49); professionals learn that their job is to defer ethical matters to the company&#8217;s legal/compliance team.</p></li></ul><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/being-a-trustworthy-ai-practitioner?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Practicing Trustworthy AI by Bran Knowles! 
This post is public so feel free to share it.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/being-a-trustworthy-ai-practitioner?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/p/being-a-trustworthy-ai-practitioner?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p></p><h5>Back to what is demanded of character</h5><p>If someone wants to be a trustworthy AI practitioner, this requires the cultivation of a keen sensitivity to the ways one&#8217;s own institution misuses its power; and above this, a willingness to do something with the moral discomfort this creates in oneself, to grapple with its inconvenient implications. This means:</p><ol><li><p><em>Being sceptical of institutional authority.</em> It is not sufficient to say that one is trustworthy because one has followed the norms and practices of Trustworthy AI. One should develop a habit of asking, &#8220;Whose interests was this norm or practice established to protect?&#8221;, &#8220;Who does following this norm or practice make me more trustworthy to?&#8221;, and &#8220;Who does it make me less trustworthy to?&#8221; </p></li><li><p><em>Reflecting on the powers the institution has, as well as the powers the practitioner has by virtue of being within an institution, and how they play out in trust relations.</em> [This could be seen as a variation on taking epistemic responsibility seriously, as explored in the <a href="https://open.substack.com/pub/trustbranknowles/p/re-politicising-trustworthy-ai?r=5z1pkv&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">previous post</a>.] 
Potter writes, &#8220;When we fail to address adequately harms arising from misuses or abuses of discretionary power within such relationships, we leave open the possibility that, in worrying about some harms, we are failing to notice other important ones&#8221; (p. 40). She cites Brody&#8217;s (1992) work on power within clinical settings, which highlights the types of power medical professionals hold over others. These include epistemic authority (i.e. that they are trained in the knowledge and techniques of healthcare), charismatic power (i.e. that they possess personal characteristics that can influence patient response, such as friendliness), and social power (i.e. that they are granted decision-making authority in medical matters). <br>A similar accounting should be done of the power wielded by AI companies/regulatory bodies, and by any given AI practitioner. Presumably epistemic authority factors in, linked to the accreditation I critiqued earlier; as does social power in being granted the authority to design technologies with societal impact. There is also, above this, a special status bestowed on AI technologies (due to their geopolitical import) that is worth reckoning with. There is some power in the invisibility of the AI practitioner&#8212;sort of the inverse of charismatic power, whereby it is very difficult for a person to identify who is responsible for what the AI has done to them. And there would appear to be a kind of testimonial power arising from the hegemonic conventions of technical language, such that only those who speak this language are entitled to opinions about AI. 
<em>I would be delighted if people could add to this list in the comments.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/being-a-trustworthy-ai-practitioner/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/p/being-a-trustworthy-ai-practitioner/comments"><span>Leave a comment</span></a></p></li><li><p><em>&#8220;&#8230;pay particular attention to the ways concealment can be used to regulate power&#8221; in the institution. </em>A person who is trustworthy will also have related virtues, like honesty. (I will discuss this at greater length in a future post.) It is important that one is honest in one&#8217;s own communications, for example feeding concerns about untrustworthiness up the chain even when it might lead to blame or outrage. Equally, it is important to push back against deceptive communications which hide important concerns from the public so as not to provoke distrust. 
Potter argues that &#8220;full virtue is difficult, if not impossible, to exhibit when the institutions are not themselves virtuous&#8221; (p. 55). </p><p>An AI practitioner who is committed to being trustworthy must actively resist the ways in which their institution is untrustworthy. At any given point, this institution could be the company one works for, the larger collective establishing best practice for AI, and/or the AI regulatory apparatus; and these institutions can be untrustworthy in the sense that they break trust (e.g. by causing harm), or in the sense that they do not allow for those working within them to be fully trustworthy. </p><p>I like the way Potter frames this dilemma as an opportunity for character building:</p><blockquote><p>&#8220;Just as social structures may impede our ability to act in a trustworthy manner, so they can provide an alternative focus for our cultivation of a trustworthy character. We may not be able, in some situations, to do what we should with regard to someone who has trusted us, but we can do what we can toward others, and we can work to change the problematic social structures&#8221; (p. 56-7).</p></blockquote><p>So what can AI practitioners do to resist this untrustworthiness?</p><ol><li><p><em>Prioritise others&#8217; trust. </em>In certain circumstances, being virtuous might entail open or covert defiance of company policy if that policy breaks the trust of those most vulnerable. There is a risk, here, of being terminated; someone might reasonably decide that this strategy would be counter-productive to their overall aim, and I am not going to tell anyone that their practical considerations don&#8217;t matter. But there is a more subtle form of resistance that is easier to enact: to not <em>only</em> provide assurances to the institution, but to also provide assurances to those that are vulnerable. 
Insisting on having to supply these additional assurances may create greater awareness of institutional untrustworthiness.</p></li><li><p><em>Ask the awkward question. </em>I saw that Microsoft recently named Trevor Noah as their Chief Questions Officer. I strongly support the function of asking questions, because questions spark realisation and change. But asking questions should be everyone&#8217;s job&#8212;it certainly shouldn&#8217;t be seen as the purview of a single, anointed individual. Every individual that makes a habit of asking &#8220;Why don&#8217;t some people trust our AI?&#8221; helps establish self-examination as the norm.</p></li><li><p><em>Build alliances. </em>One of the scariest things about standing up to power is the thought of doing so alone. Potter offers a refreshingly practical suggestion that all AI practitioners can take up: to &#8220;prepare for those times when one may need to resist [by forming] alliances with co-workers and like-minded [individuals] so that one does not end up in the position of resisting without any support from others&#8221; (p. 87).</p><p></p></li></ol><h3>Final thoughts</h3><p>I want to end this post with the importance of striving, rather than perfection. As Potter reminds us, </p><blockquote><p>&#8220;We cannot expect too much heroism of one another. This world requires compromise and negotiation. But it matters to our moral character&#8212;and to the future of more equitable, more compassionate communities and institutions&#8212;whom it is that we are negotiating with, where those compromises are being drawn, and who is getting sacrificed as a result&#8221; (p. 88). </p></blockquote><p>While we cannot be trustworthy to everyone all the time, when practitioners recognise the institutional pressures that make them less trustworthy, they are significantly less likely to appease the institution to the detriment of others. 
Once seen, it becomes possible to imagine small and big ways of resisting, of enhancing one&#8217;s own trustworthiness despite the institution, and of promoting change within the institution. </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Practicing Trustworthy AI by Bran Knowles! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share&quot;,&quot;text&quot;:&quot;Share Practicing Trustworthy AI by Bran Knowles&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/?utm_source=substack&amp;utm_medium=email&amp;utm_content=share&amp;action=share"><span>Share Practicing Trustworthy AI by Bran Knowles</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Re-politicising trustworthy AI]]></title><description><![CDATA[Practitioners' responsibility to oppressed individuals]]></description><link>https://trustbranknowles.substack.com/p/re-politicising-trustworthy-ai</link><guid isPermaLink="false">https://trustbranknowles.substack.com/p/re-politicising-trustworthy-ai</guid><dc:creator><![CDATA[Bran 
Knowles]]></dc:creator><pubDate>Thu, 07 Aug 2025 08:03:35 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1591631651768-24788b695600?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxvcHByZXNzaW9uJTIwY2FyZXxlbnwwfHx8fDE3NTQ0MTEwOTl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In my <a href="https://open.substack.com/pub/trustbranknowles/p/why-dont-some-people-trust-my-ai?r=5z1pkv&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">last post</a>, I made the case that trust of AI cannot be won by circumventing the crucial matter of trustworthiness of those responsible for AI&#8217;s development, deployment, and regulation. A Trustworthy AI ethics that centres principles and metrics is a poor substitute for the institutionalised practice of trustworthiness as a matter of moral character.</p><p>Now I will turn to the task of articulating the features of trustworthy AI from a virtue ethics perspective on what it means to be a trustworthy individual, as argued by Nancy Nyquist Potter in <em>How Can I Be Trusted?</em></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1591631651768-24788b695600?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxvcHByZXNzaW9uJTIwY2FyZXxlbnwwfHx8fDE3NTQ0MTEwOTl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1591631651768-24788b695600?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxvcHByZXNzaW9uJTIwY2FyZXxlbnwwfHx8fDE3NTQ0MTEwOTl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, 
https://images.unsplash.com/photo-1591631651768-24788b695600?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxvcHByZXNzaW9uJTIwY2FyZXxlbnwwfHx8fDE3NTQ0MTEwOTl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1591631651768-24788b695600?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxvcHByZXNzaW9uJTIwY2FyZXxlbnwwfHx8fDE3NTQ0MTEwOTl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1591631651768-24788b695600?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxvcHByZXNzaW9uJTIwY2FyZXxlbnwwfHx8fDE3NTQ0MTEwOTl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1591631651768-24788b695600?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxvcHByZXNzaW9uJTIwY2FyZXxlbnwwfHx8fDE3NTQ0MTEwOTl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="3648" height="5472" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1591631651768-24788b695600?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxvcHByZXNzaW9uJTIwY2FyZXxlbnwwfHx8fDE3NTQ0MTEwOTl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:5472,&quot;width&quot;:3648,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;text&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="text" title="text" 
srcset="https://images.unsplash.com/photo-1591631651768-24788b695600?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxvcHByZXNzaW9uJTIwY2FyZXxlbnwwfHx8fDE3NTQ0MTEwOTl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1591631651768-24788b695600?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxvcHByZXNzaW9uJTIwY2FyZXxlbnwwfHx8fDE3NTQ0MTEwOTl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1591631651768-24788b695600?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxvcHByZXNzaW9uJTIwY2FyZXxlbnwwfHx8fDE3NTQ0MTEwOTl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1591631651768-24788b695600?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxfHxvcHByZXNzaW9uJTIwY2FyZXxlbnwwfHx8fDE3NTQ0MTEwOTl8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" 
width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="true">Sam Leventhal</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><p></p><h3>Requirements of trustworthiness</h3><p>Potter offers ten requirements of trustworthiness. I have grouped these together to explore cross-cutting themes across a series of posts. Here, I will explore requirements numbered 1, 2, and 7, respectively, in Potter&#8217;s list, all of which pertain to how power imbalances contribute to problems of trust and what that necessitates with regards to trustworthiness. I consider below how these requirements might be incorporated into the practicing of trustworthy AI. </p><h4><em>&#8220;That we give signs and assurances of our trustworthiness.&#8221;</em></h4><p>Potter describes trust as a &#8220;process of induction&#8221;, where one projects into the future what one anticipates happening based on prior experience. This means that those with more experience of broken trust&#8212;such as those oppressed and marginalised by inequitable systems, and/or those with relational trauma&#8212;will find it harder to assume trust in the future. </p><p>Such &#8220;learned strategies of distrust&#8221; (p. 19) are rational, even if they may not generalise correctly to every context. (Sometimes trust is not broken as anticipated.) Seeking to change this perspective, however&#8212;as one might when focused on supporting &#8220;reasonable risk-taking&#8221; of the trustor&#8212;is an act of erasure, a denial of that person&#8217;s reality. 
Greater trust is not a realistic solution for individuals who know all too well the likelihood of their trust being betrayed. And so, the very fact that individual dispositional differences in trustingness are shaped by &#8220;cultural, material, and ideological forces&#8230; highlights the importance of shifting questions of trust and distrust to those of trustworthiness&#8221; (p. 19).</p><p>Potter also argues that because expectations of broken trust can arise from real patterns of exploitation and oppression, the burden is on &#8220;persons with more privilege or power&#8221; to do the work of &#8220;overcom[ing] the disenfranchised person&#8217;s disposition to distrust&#8221; by, first, being deserving of trust, and second, &#8220;[giving] assurances which indicate a trustworthy character&#8221; (pp. 17, 20). </p><p>In <a href="https://krvarshney.github.io/pubs/KnowlesFRV_facct2023.pdf">my academic writing</a>, I have tried to sensitise those working on the &#8220;problem&#8221; of trust of AI to the idea that distrust of AI is a legitimate response to the structural violence it amplifies, accelerates, and sanitises. We attend to trust when we attend to this structural violence&#8212;i.e., to the ways that AI is untrustworthy, to some more than others.</p><p>But it&#8217;s also important to point out the paradigmatic feature of AI, that of it being wielded by the powerful over the less powerful. I have noticed a kind of exasperation at people who distrust AI, and little acknowledgement of the burden of responsibility in this relationship: that distrust reflects a failure to show people that/how AI is trustworthy. </p><p>There&#8217;s no recipe for how to provide these signs and assurances, because what you&#8217;d need to provide depends entirely on the particular vulnerabilities pertinent to a given AI. 
But what this trustworthiness requirement teaches us is that being trustworthy is not enough to earn trust; <em>trustworthiness must be exhibited</em>.</p><h4><em>&#8220;That we take epistemic responsibility seriously.&#8221;</em></h4><p>We might say that properly dis/trusting requires of the trustor that they are suitably curious and make an effort to understand what makes a potential trustee trustworthy or not. If we are centring questions of trustworthiness, however, we would recognise that &#8220;being trustworthy also requires epistemic effort&#8221; (p. 27). According to Potter, this is characterised by &#8220;active engagement with self and others in knowing and making known one&#8217;s own interests, values, moral beliefs, and positionality, as well as theirs&#8221; (p. 
27).</p><p>The implication for those developing AI systems is two-fold:</p><ol><li><p><em>Reflect on one&#8217;s own privileges to avoid reflexively reinforcing inequitable dynamics and perpetuating distrust.</em> Here we might directly adopt Potter&#8217;s suggestion to consider: &#8220;how does one&#8217;s situatedness affect one&#8217;s relation to social or economic privileges? How do one&#8217;s particular race and gender, for example, affect relations of trust with diverse others? In what ways do one&#8217;s values and interests impede trust with some communities and foster it with others&#8221; (p. 27)? Such questions might lead, for example, to AI systems that are much more cautious in their categorisation of people due to an awareness of other systems of domination that individuals are subjected to.</p></li></ol><ol start="2"><li><p><em>Represent values and beliefs explicitly in the interface.</em> [Shout out to <span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;Rachel Maron&quot;,&quot;id&quot;:56467999,&quot;type&quot;:&quot;user&quot;,&quot;url&quot;:null,&quot;photo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!vnVP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3d0d4a7-e79b-48c6-8b17-30c7fcc9068c_1024x1024.jpeg&quot;,&quot;uuid&quot;:&quot;82bdf24b-2471-411c-858f-961661a476d6&quot;}" data-component-name="MentionToDOM">Rachel Maron</span> for this portion, as I draw on the exchange we had in the comments for an <a href="https://open.substack.com/pub/trustbranknowles/p/what-can-the-doppelganger-help-us?r=5z1pkv&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">earlier post</a>.] There is a tendency, when treating the earning of trust as a PR exercise, to find ways to soften negative reactions to systems. 
But being upfront with users about the various trade-offs implicit in the design of the system, and how a particular problem is being conceived and addressed from the standpoint of the individuals involved, creates helpful &#8220;trust friction.&#8221; Trust friction surfaces the specific requirements for being trustworthy (what it is that users are not comfortable with), and assuming a company is reasonably responsive to distrust as a negotiation signal, this will move the company/system towards improved trustworthiness over time. Being explicit about these trade-offs also makes those developing these systems vulnerable to appraisal by those responsible for regulating AI, creating additional incentive to be trustworthy in the myriad (if still limited) ways that are legally actionable.</p></li></ol><h4><em>&#8220;That we recognize the importance of being trustworthy to the disenfranchised and oppressed.&#8221;</em></h4><p>Trustworthiness is situated in relationships, which means that it is possible to be trustworthy to some people and not others. A person who is committed to the virtue of trustworthiness will face moral dilemmas where they must choose whom to be trustworthy to and whose trust to betray. Potter&#8217;s position is that in such instances we ought to &#8220;take as a primary consideration those who are already vulnerable in relation to dominant structures, in general, and to us, in particular&#8221; (p. 29). This is in keeping with an understanding of trustworthiness that is characteristically &#8220;nonexploitative and nondominating&#8221; (p. 29).</p><p>Again, being oriented toward trust (rather than trustworthiness) would have us make a different calculation. Typically it is those who are relatively privileged that have true discretion in their adoption of AI, whereas those less privileged tend to be forced into relationship with AI, in many cases with systems being used on them to exert control. But the tendency to view the ability to exercise distrust (i.e. to reject AI) as a reason for attending to distrust contributes to the testimonial injustice oppressed people already experience, with some people&#8217;s claims of untrustworthiness taken seriously but not others&#8217;. 
</p><p>For trustworthy AI, this suggests:</p><ol><li><p>The importance of reflecting on how &#8220;whose complaints are taken seriously intersect with social and cultural markers of worth and difference&#8221; (p. 77); i.e. whose distrust of AI are we responsive to, and why?</p></li><li><p>The need to cultivate genuine caring for the concerns of those with the least discretionary power. </p></li></ol><p>The fact that disenfranchised people may more readily give over trust (well, reliance) to AI due to practical constraints&#8212;<em>particularly</em> in light of their increased likelihood of dispositional distrust (see the first requirement explored in this post, above)&#8212;should focus practitioners on how they might be trustworthy to these people most of all.</p><h3>Final thoughts</h3><p>I want to end by echoing the brilliant Ruha Benjamin. In her recent Tanner Lecture, she argued that while AI purports to promote efficiency and progress, we should train ourselves to ask the questions, &#8220;Efficient at what? 
Progress for whom?&#8221; </p><p>Likewise, I want to propose that the overarching lesson of this post is that when we hear the phrase &#8220;trustworthy AI&#8221;, and when we are tempted to use it ourselves, we need to get better at asking, &#8220;In what way is it trustworthy?&#8221; and &#8220;Trustworthy for whom?&#8221;</p>]]></content:encoded></item><item><title><![CDATA["Why Don't Some People Trust My AI?"]]></title><description><![CDATA[Introducing a virtue theory of trustworthy AI]]></description><link>https://trustbranknowles.substack.com/p/why-dont-some-people-trust-my-ai</link><guid isPermaLink="false">https://trustbranknowles.substack.com/p/why-dont-some-people-trust-my-ai</guid><dc:creator><![CDATA[Bran Knowles]]></dc:creator><pubDate>Fri, 01 Aug 2025 11:46:55 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1592169293959-7d35fd688c30?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxldGhpY3MlMjBnb29kJTIwcGVyfGVufDB8fHx8MTc1Mzk2OTI0MHww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Sometimes trust of an AI is about the technology: do I 
predict I can use it to achieve my desired ends, or that it will perform the specified task correctly, etc.?</p><p>Sometimes concerns about protection of <em>other things</em> one values enter in: do I believe that the information I provide to the AI in order to make it work will be handled with care, that my privacy will not be compromised, that I will retain my agency, etc.?</p><p>Sometimes trust of an AI is coloured by one&#8217;s feelings about <em>other AI</em>: can I trust that this AI is different to AIs that I have seen cause harm, do I think AIs are implicated in the enhancement/diminishment of certain things I value, etc.?</p><p>But sometimes it&#8217;s fundamentally about <em>character</em>: do I believe that those developing, deploying, and regulating AI care about what matters to me, and that they can be counted on to protect these things because that is the sort of people they are?</p><p>For this post I will introduce my thesis that <em>insufficient attention to trustworthiness as a virtue contributes to distrust of AI</em>; and that trust is supported <em>at all of these levels</em> when we refocus &#8220;trustworthy AI&#8221; on the individual moral responsibility of those developing, deploying, and regulating this technology. </p><p>I will be drawing on the book <em>How Can I Be Trusted?</em> by Nancy Nyquist Potter as I begin to articulate a virtue theory of &#8220;trustworthy AI&#8221;. This will scaffold my approach to <em>trustworthy AI as a practice</em>, i.e. 
a way that AI practitioners show up in the world.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1592169293959-7d35fd688c30?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxldGhpY3MlMjBnb29kJTIwcGVyfGVufDB8fHx8MTc1Mzk2OTI0MHww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1592169293959-7d35fd688c30?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxldGhpY3MlMjBnb29kJTIwcGVyfGVufDB8fHx8MTc1Mzk2OTI0MHww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1592169293959-7d35fd688c30?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxldGhpY3MlMjBnb29kJTIwcGVyfGVufDB8fHx8MTc1Mzk2OTI0MHww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1592169293959-7d35fd688c30?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxldGhpY3MlMjBnb29kJTIwcGVyfGVufDB8fHx8MTc1Mzk2OTI0MHww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1592169293959-7d35fd688c30?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxldGhpY3MlMjBnb29kJTIwcGVyfGVufDB8fHx8MTc1Mzk2OTI0MHww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1592169293959-7d35fd688c30?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxldGhpY3MlMjBnb29kJTIwcGVyfGVufDB8fHx8MTc1Mzk2OTI0MHww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="5184" height="3888" 
data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1592169293959-7d35fd688c30?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxldGhpY3MlMjBnb29kJTIwcGVyfGVufDB8fHx8MTc1Mzk2OTI0MHww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:3888,&quot;width&quot;:5184,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;black and white labeled bottle&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="black and white labeled bottle" title="black and white labeled bottle" srcset="https://images.unsplash.com/photo-1592169293959-7d35fd688c30?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxldGhpY3MlMjBnb29kJTIwcGVyfGVufDB8fHx8MTc1Mzk2OTI0MHww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1592169293959-7d35fd688c30?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxldGhpY3MlMjBnb29kJTIwcGVyfGVufDB8fHx8MTc1Mzk2OTI0MHww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1592169293959-7d35fd688c30?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxldGhpY3MlMjBnb29kJTIwcGVyfGVufDB8fHx8MTc1Mzk2OTI0MHww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1592169293959-7d35fd688c30?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwyfHxldGhpY3MlMjBnb29kJTIwcGVyfGVufDB8fHx8MTc1Mzk2OTI0MHww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div 
class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="true">Gio Bartlett</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><h3>The neglect of trustworthiness</h3><p>Potter&#8217;s book is remarkable for its soul-searching examination of what it means to be trustworthy. She notes not only the surprising lack of philosophical engagement with people&#8217;s lived experience of trust (typically messier than game-theoretic or contractual views allow for), but also a preoccupation with the avoidance of betrayal of trust. 
This tends to prompt questions of who to trust, on what basis, and with what assurances, all of which are oriented toward &#8220;reasonable risk-taking&#8221; of the trustor. What is missing from this are important questions about &#8220;responsibility-taking&#8221; of the trusted party. Potter observes that, especially when it comes to those in positions of power, people rarely ask themselves the self-examining question, </p><blockquote><p>&#8220;Why don&#8217;t some people trust me?&#8221; (p. xi). </p></blockquote><p>It might appear that trustworthy AI is an exception to this general neglect of trustworthiness, in that a whole collective of researchers, practitioners, and policymakers have busied themselves with the elaboration of principles that should be followed and metrics that should be used to evaluate the trustworthiness of AIs. But predetermining principles and metrics of trustworthiness shows a concern with pre-empting (only) the distrust-provoking matters that the technical elite deem valid. This is different from a true self-examination.</p><p>And the absence of true self-examination has consequences. Currently, there is little understanding of how any of the things we can measure about an AI&#8217;s &#8220;trustworthiness&#8221; might translate into trust because technical folk assume they know (or know better). As one might imagine, this can lead to the coming apart of &#8220;trustworthy AI&#8221; from actual trust of AI. 
</p><p>In contrast, when you treat any and all reasons for distrust as valid, and open yourself up to a process of self-examination, you naturally ask the key question: &#8220;How can we actually meet the trust needs of the people that are expressing these views?&#8221; This reflects the disposition of someone who takes their responsibility to be trustworthy seriously.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/why-dont-some-people-trust-my-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/p/why-dont-some-people-trust-my-ai?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h3>The inadequacy of a rules-based AI ethics</h3><p>Alas, I have given up trying to follow all the developments in AI regulation. It&#8217;s exhausting, confusing, but also depressing. The principles they supposedly enshrine are always still vague, and the regulations appear to do more to protect AI companies from having to be decent than they do to encourage decency. They are, after all, developed with Big Tech at the table, and are designed to help these companies make money in the hope that their wealth trickles down to the rest of society. (Is that too cynical?)</p><p>I actually think regulation is important, and has a place. But fundamentally, what I think is misguided about pinning so much on a regulatory ecosystem, at least when it comes to promoting trust in AI, is that it is being built in an attempt to bypass the crucial matter of trustworthiness. </p><h5>Deep trust</h5><p>Trust can be deep, or it can be shallow. The difference is the extent to which you think someone is committed to being trustworthy. 
As Potter explains, you may trust that you will be well-treated because you understand that certain sanctions prohibit someone from treating you badly. Yes, this is a kind of trust; but if you&#8217;re assuming this is the <em>only </em>reason for someone treating you well, you have to be vigilant against trusting them outside these narrowly defined conditions. </p><p>A deeper form of trust is predicting that you will be well-treated because you know that the other person has good will toward you (they wouldn&#8217;t want to harm you). You could assume, here, that even when an opportunity presents itself to act badly without sanction, there is little appetite to do so.</p><p>But the deepest form of trust, Potter argues, is when &#8220;a prediction of being well-treated&#8230; is grounded in a belief not only in the other&#8217;s good will toward oneself but in a belief that the other&#8217;s good will is part of a more general disposition that extends beyond the context of this particular relationship&#8221; (p. 5). In other words, the reason your trust is so deep, and can be extended to the widest set of contexts, is that being trustworthy matters so much to this person that they would remain trustworthy <em>even when it wasn&#8217;t in their interest to do so</em>, because being trustworthy is the thing that matters to them most. </p><h5>AI regulation is a cat-and-mouse game</h5><p>How much do you trust that regulators understand the technology well enough, and can move fast enough, to protect you from harm? 
Assuming the mouse is determined to do whatever it can to get the cheese (shout out to Farida Khalaf and <a href="https://fafi25.substack.com/p/the-mouse-that-ate-the-world-how">her recent post</a>), and is willing to cause damage in the pursuit of this goal, it&#8217;s impossible to adequately constrain its untrustworthiness.</p><p>If what is taken for granted here is that Big Tech cannot be expected to have any good will toward people, that the best we can hope for is that we create enough mousetraps to catch them when they are tempted, then there are no grounds for trusting the AI companies as they innovate and act in the world&#8230; even if one might sometimes be able to (shallowly) trust a given AI. (Though I would argue that if you really felt the company was untrustworthy, trusting the product is probably unwise. 
Tesla car panels falling off on the highway would seem to illustrate my point nicely.)</p><h5>Commitment to principles is not enough</h5><p>Potter&#8217;s concern is not so much with regulatory constraints as with the limits of deontological ethics, which would seek to define principles for good action. She argues that believing someone is trustworthy because they are committed to a set of principles falls short of &#8220;what we tend to look for when evaluating whether or not we can trust someone&#8221;&#8230; namely their <em>trustworthiness</em>, &#8220;that is, a disposition that is responsive to others in their particularity and not just an impartial adherence to rules&#8221; (p. 6). She adds, &#8220;An attitude of indifference to particular persons does not foster a great degree of trust even if &#8216;right actions&#8217; are performed&#8221; (p. 6). </p><p>What she is drawing attention to is something too often missed&#8212;and its omission is a source of distrust for so many with lived experience of marginalisation&#8212;namely, that building an ethics around universal principles (as in the guidelines for ethical AI by the European Commission&#8217;s High-Level Expert Group on AI) tends to be &#8220;accompanied by an indifference to feelings for others or an impersonal and impartial stance&#8221; (p. 6). 
The reason this is harmful is that &#8220;the impartial point of view &#8216;masks ways in which the particular perspectives of dominant groups claim universality, and helps justify hierarchical decision-making structures&#8217;&#8221; (p. 6, citing Young, 1990). </p><p>This is precisely what I was getting at when I said that &#8220;trustworthy AI&#8221; is a case of being tuned in to respond only to the set of concerns that tech elites deem valid. <a href="https://krvarshney.github.io/pubs/KnowlesFRV_facct2023.pdf">I have argued elsewhere</a> that those creating such principles fail to recognise that their perspective on what makes AI trustworthy is a <em>particular</em> view, specifically a &#8220;view-from-the-top&#8221;: </p><blockquote><p>&#8220;from this vantage the benefits of AI are clear and the harms of AI are abstract, largely separate from lived experience but a potential threat to an ambitious technology agenda. This view-from-the-top has thus informed highly technocentric principles for &#8216;trustworthiness&#8217; which&#8230; obscure and facilitate structural violence by shifting attention away from how the very foundations of AI are inherently extractive and prone to reproducing and, at the same time, amplifying extant inequitable social structures through the logic of categorization and simplification...&#8221; (Knowles et al., 2023).</p></blockquote><p>So while the presence of principles is better than nothing&#8212;it indicates an understanding that there is a moral responsibility to work through, and is more likely to lead to a prediction of being treated well&#8212;what Potter suggests, and I&#8217;m inclined to agree, is that &#8220;it is not likely to be trust at a very deep level if the good treatment is based on a commitment to universal principles and accompanied by an indifference to feelings for others or an impersonal and impartial stance&#8221; (p. 
6).</p><p>What I have come to suspect is that, while in the short term companies may benefit from shallow trust, the long-term consequence of inattention to deep trust is that feelings of discomfort accumulate in the psyche over time and surface as ostensibly less &#8220;rational&#8221;, more affectively loaded, totalising distrust of AI. We feel uneasy without the assurances of knowing we are taken care of by others in this world; we recognise that what has changed around us is the emergence of AI; so we distrust AI. It&#8217;s a working hypothesis. Thoughts welcome!</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/why-dont-some-people-trust-my-ai/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/p/why-dont-some-people-trust-my-ai/comments"><span>Leave a comment</span></a></p><h3>A virtue ethics of trustworthy AI</h3><p>The position that Potter arrives at is that trustworthiness is a matter of <em>character</em>. When determining whether someone is trustworthy, we are attempting to &#8220;reach a sense of the whole person&#8221; (p. 2; quoting Govier, 1991). While it might be more correct from a risk-avoidance perspective to take care to specify in which contexts a person can or can&#8217;t be trusted, we tend, instead, to have an attitude toward a person based on our sense of how we imagine they tend to operate in the world. But I don&#8217;t think this is laziness, exactly; or even wrong. Often we don&#8217;t know the full extent of what we are entrusting to another. (Consider, for example, how difficult it can be to comprehend precisely how consenting to certain terms and conditions might matter for one&#8217;s privacy; <a href="https://dl.acm.org/doi/abs/10.1145/3609329">an issue I have written about elsewhere</a>.) 
We make judgments about character because this actually provides a reasonably good basis for predicting good behaviour even in contexts we haven&#8217;t anticipated arising.</p><p>To be able to judge trustworthiness&#8212;&#8220;that she can be counted on, as a matter of the sort of person she is, to take care of those things with which we are considering entrusting her&#8221;&#8212;we must know something of the person&#8217;s &#8220;values, commitments, and loyalties&#8221; (p. 7).</p><p>So&#8230; what does Big Tech stand for? </p><p>Eugh. </p><p>It matters (to trust of AI) that the public faces of so many of these companies are, frankly, so loathsome. Consider <a href="https://www.esquire.com/uk/latest-news/a19490586/mark-zuckerberg-called-people-who-handed-over-their-data-dumb-f/">this exchange between Mark Zuckerberg and a friend</a>:</p><blockquote><p>Zuck: Yeah so if you ever need info about anyone at Harvard<br><br>Zuck: Just ask.<br><br>Zuck: I have over 4,000 emails, pictures, addresses, SNS<br><br>[Redacted Friend's Name]: What? How'd you manage that one?<br><br>Zuck: People just submitted it.<br><br>Zuck: I don't know why.<br><br>Zuck: They "trust me"</p><p>Zuck: Dumb f**ks.</p></blockquote><p>I don&#8217;t think I really need to convince anyone that the values of tech billionaires are... skewed, shall we say, toward profit over all other values. The issue is that it becomes impossible to trust their companies <em>deeply</em> when we know that our interests will come up against, and inevitably lose out to, their quest for profit.</p><p>Moreover, we are aware of the power these companies hold over us. (Is anyone entirely comforted by regulation?) Inconveniently, the fact that this relationship is so asymmetrical is precisely why character matters so much! 
As Potter reminds us, &#8220;&#8230;when differences in privilege and power exist between us, we may be uneasy about what each other cares about: each sees that the other values some things which she or he sees as either incompatible with or hostile to the things <em>she</em> or <em>he</em> values. Hence, the emphasis is on how willing and able one is to care for those goods others value even when those are not, or do not appear to be, entirely harmonious with the goods one values oneself&#8221; (p. 12). </p><p>In other words, to trust AI, we have to trust that those responsible for it are committed to being trustworthy as a matter of <em>character</em> because they recognise their responsibility to care for us.</p><h5>Living in &#8220;reality&#8221;</h5><p>There is no point fantasising that tech billionaires will &#8220;see the light&#8221; and decide that they must commit to being trustworthy. But I do think there are things that can change without this.</p><p>First, the way we frame and talk about trustworthy AI matters. It sets a moral expectation. When we talk about trustworthy AI in the sterile language of principles and metrics, we invoke the rational brain, implicitly conceding that such matters are subject to reason. How different it would be to assert that such matters are <em>beyond reason</em>, that the importance of being trustworthy is not up for debate! This is about being a decent, trustworthy person in the world because that is <em>everything</em>. And we are a society that will hold people to that expectation. We don&#8217;t celebrate those who get to the top by being untrustworthy; we shame them. 
(This is the metaphorical stick we might use&#8230; if they care.)</p><p>Couching trustworthy AI in the language of our emotional brain is what can create the emotional motivation to &#8220;behave in ways that are ultimately in our interest and the interest of those within our sphere of care or concern&#8221; (to quote Drew Westen from his book <em>The Political Brain, </em>2008; p. 102). (Carrot + stick now.) </p><p>Additionally, it invites people who do feel distrustful to tap into their emotions, potentially contributing to a groundswell of public pressure to be better. (Carrot + double stick.)</p><p>Still, I sense many will be unconvinced. So I&#8217;ll give another reason it matters: these billionaires are at the top, but they are not the ones doing the work. There is so much that individuals within any tech company can do when they are committed to being trustworthy. I will focus on them, because it is in them that I find hope for change. </p><h5>The shift</h5><p>So what I am proposing is a radical reframing of trustworthy AI in terms of virtue ethics, i.e. to focus on the ways that AI practitioners can cultivate trustworthiness and embody that character in their work. This, I suggest, will ripple out into improved trustworthiness and greater trust in the AI products being built (not just that they work as expected, but that they work in ways sensitive to our various vulnerabilities&#8212;the things we value that are entrusted to AI). But it will also begin to instil trust in the culture from which AI springs, through a sort of morality-based professionalisation of the sector, creating a deep trust that spills over into greater trust of AI as a class of technology.</p><p>For now, I will provide a mere sketch of what is distinctive about this framing: </p><ul><li><p>It &#8220;puts dispositions, and not rules, at the center&#8221; (p. xii).</p></li><li><p>It demands that we aspire to being &#8220;fully trustworthy&#8221; (p. xii). 
This means: </p><ul><li><p>not confining our trustworthiness to some specific good(s);</p></li><li><p>not treating certain goods as supererogatory (&#8220;it is morally permissible to do x and it would be good to do x, but it would not be morally wrong to not do x&#8221;). </p></li></ul></li><li><p>It entails engaging in the reflective practice of considering &#8220;ways in which each of us can enhance proper trust and ease pervasive distrust&#8221; (p. 2). </p></li><li><p>Practicing trustworthiness in this way is the habit of those who have committed to cultivating a trustworthy character.</p></li><li><p>This trustworthy character provides a basis for deep trust of AI.</p></li></ul><h3>Conclusion</h3><p>To keep this reasonably digestible, I have stopped short of describing the features of trustworthy AI when we treat trustworthiness as a virtue. 
This will be the subject of the next several posts.</p><p>For now I hope to have provided a reasonably compelling argument for focusing more seriously on what it means to be trustworthy, and for demanding that practicing trustworthiness be integral to the development, deployment, and regulation of AI.</p>]]></content:encoded></item><item><title><![CDATA[5 Lessons of the doppelganger]]></title><description><![CDATA[What distrust is telling us to do differently with AI]]></description><link>https://trustbranknowles.substack.com/p/5-lessons-of-the-doppelganger</link><guid isPermaLink="false">https://trustbranknowles.substack.com/p/5-lessons-of-the-doppelganger</guid><dc:creator><![CDATA[Bran Knowles]]></dc:creator><pubDate>Fri, 25 Jul 2025 15:41:07 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1608272667943-cbf5ee73c0fa?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzMHx8ZG91YmxlJTIwZXhwb3N1cmV8ZW58MHx8fHwxNzUzNDU4Mzg3fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For my first three posts, I did a deep dive on Naomi Klein&#8217;s book, <em>Doppelganger</em>. 
This is because the effect of Klein&#8217;s work is, without fail, to reignite my desire to be a <em>writer&#8212;</em>to put the world in order through words. So I finally decided to act on that longing by starting on Substack.</p><p>Today I wanted to write something a little more tangible (less indulgent, perhaps): a set of insights to inform AI practice based on what I explored through the lens of <em>distrust of the doppelganger</em>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1608272667943-cbf5ee73c0fa?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzMHx8ZG91YmxlJTIwZXhwb3N1cmV8ZW58MHx8fHwxNzUzNDU4Mzg3fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1608272667943-cbf5ee73c0fa?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzMHx8ZG91YmxlJTIwZXhwb3N1cmV8ZW58MHx8fHwxNzUzNDU4Mzg3fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1608272667943-cbf5ee73c0fa?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzMHx8ZG91YmxlJTIwZXhwb3N1cmV8ZW58MHx8fHwxNzUzNDU4Mzg3fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1608272667943-cbf5ee73c0fa?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzMHx8ZG91YmxlJTIwZXhwb3N1cmV8ZW58MHx8fHwxNzUzNDU4Mzg3fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1608272667943-cbf5ee73c0fa?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzMHx8ZG91YmxlJTIwZXhwb3N1cmV8ZW58MHx8fHwxNzUzNDU4Mzg3fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img 
src="https://images.unsplash.com/photo-1608272667943-cbf5ee73c0fa?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzMHx8ZG91YmxlJTIwZXhwb3N1cmV8ZW58MHx8fHwxNzUzNDU4Mzg3fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="3834" height="2556" data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1608272667943-cbf5ee73c0fa?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzMHx8ZG91YmxlJTIwZXhwb3N1cmV8ZW58MHx8fHwxNzUzNDU4Mzg3fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2556,&quot;width&quot;:3834,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;man in gray polo shirt&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="man in gray polo shirt" title="man in gray polo shirt" srcset="https://images.unsplash.com/photo-1608272667943-cbf5ee73c0fa?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzMHx8ZG91YmxlJTIwZXhwb3N1cmV8ZW58MHx8fHwxNzUzNDU4Mzg3fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1608272667943-cbf5ee73c0fa?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzMHx8ZG91YmxlJTIwZXhwb3N1cmV8ZW58MHx8fHwxNzUzNDU4Mzg3fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1608272667943-cbf5ee73c0fa?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzMHx8ZG91YmxlJTIwZXhwb3N1cmV8ZW58MHx8fHwxNzUzNDU4Mzg3fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, 
https://images.unsplash.com/photo-1608272667943-cbf5ee73c0fa?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwzMHx8ZG91YmxlJTIwZXhwb3N1cmV8ZW58MHx8fHwxNzUzNDU4Mzg3fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Photo by Mishal Ibrahim on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><h4>1. 
Validate, and act in response to, the feeling of distrust</h4><p>The highest-level takeaway from these discussions is that, while trust may at times be best understood as a belief predicated on rational modes of thinking, trust is also (and I&#8217;d suggest, primarily) an <em>affective attitude</em>. Failure to appreciate this emotional component is what can make trust and trustworthiness appear to come apart inexplicably in the AI space, where people appear not to trust AI systems that meet the specified criteria for trustworthiness in terms of metrics and assurance processes.</p><p>But where, for example, do the metrics and processes used to ensure a system&#8217;s trustworthiness attend to disgust, resentment, or anxiety? Metrics, at least when presented to the user as evidence of trustworthiness, are the language of belief-based trust, not of emotions. So, too, is process when undertaken principally to demonstrate compliance. Perhaps the reason so many people feel such an intense distrust of AI comes down to a felt sense that those developing and advocating AI lack inquisitiveness about their <em>feelings of distrust</em>; that they don&#8217;t see them as valid, but instead view them as a misfiring of logic.</p><div class="pullquote"><p>Emotions are not noise in the system; they are primal inputs to dis/trust.</p></div><p>Negative emotions are a body&#8217;s protective mechanism for responding to threat. This means that those developing AI systems should treat distrust as a signal that the AI system is threatening in some way(s). 
In practice, this looks like a) doing something to meaningfully address the threat; b) communicating that the threat has been taken seriously and what has been done about it; and c) checking back in with the feeling to see whether it has subsided.</p><h4>2. Create the right expectation, expose the (true) <em>merwelt</em></h4><p>One of the things we should see if we are taking emotions seriously is that people are weirded out by technology that presents itself as &#8220;you, only better&#8221;. This is a threatening premise to begin with!&#8212;one that potentially invites defensiveness, too, which I have not yet discussed.</p><p>The main trust issue, however, is that this premise is a lie. The attempt to mimic intelligence, to hide the machinic processes, is deceptive. Catching someone in a lie can destroy trust; here, the mask is bound to slip, the lie exposed, and one sees that the machine is &#8220;intelligent&#8221; only in a very limited (disappointing) sense. But what Richard Harper is really arguing is that AIs designed to mimic intelligence are setting users up for the wrong interactional dynamic&#8212;a kind of wrong expectation that can only lead to distrust when the AI does not do what one anticipates. </p><p>He proposes a solution that begins with abandoning the guiding metaphor/aspiration of Artificial Intelligence: encouraging users to think of what happens inside an AI system as <em>merwelt</em>, rather than consciousness or intelligence. 
&#8220;[W]e need to make our <em>consciousness </em>of AI different because AI is other in a variety of ways, and distinct from us in terms of what <em>its </em>consciousness might be, and indeed so different that the term consciousness might not be helpful&#8230; The more we recognize that there are myriad types of AI and myriad merwelts the less will we be fooled into thinking its &#8216;understandings&#8217; are a version of our own&#8221; (p. 22). </p><p>There are ways designers can expose the <em>merwelt </em>(Harper provides several suggestions for doing so in ChatGPT)&#8212;after all, its concealment has been an intentional sleight of hand, an attempt to make AI seem more magical than it is. But hype is profitless in terms of trust, both because it misleads the user in how to succeed with the technology, which leads to performance breakdowns, and because those pushing demonstrably false narratives come to be viewed as snake oil salesmen. </p><p>Instead, helping users set realistic expectations for AI as a tool, with grammars of action that can be mastered for defined ends, creates better conditions for trust to grow. This work can be understood as the practice of embodying &#8220;rich trustworthiness&#8221;, that is, clearly signalling for which tasks/ends a person can/not rely on AI. 
</p><div class="pullquote"><p>The AI equivalent of &#8220;rich trustworthiness&#8221; is designing interfaces to signal precisely what a user can rely on it to do or not do, without pretense of extraordinary powers.</p></div><h4>3. Create trust friction to surface opportunities to improve</h4><p>I am definitely not suggesting that the <em>merwelt </em>is innocuous and that exposing it would allay all distrust. I&#8217;m saying it&#8217;s at least an honest beginning, not just for better human-AI interactions, but also for a conversation about what we value and how that gets embedded in our technologies.</p><p>What my musings on the doppelganger have attempted to show is that some of what we are reacting to is the exposed logics of a dark ideology. Particularly alienating, I would suggest, is its implicit assertion of the supremacy of the values of efficiency and optimisation, and its culmination in the sterile sorting of people into categories of worthy vs unworthy. </p><p>While for practical reasons I might like to get painful things done as efficiently as possible, I wouldn&#8217;t rank efficiency higher on my list of values than things like compassion, honesty, care, connection&#8230; Really, it&#8217;s towards the bottom of my list, at least when I am in a reflective (rather than panicked) mode.</p><p>Optimisation, on the other hand, is inherently threatening to me and so many others who would be subject to culling. 
In my case, having a disability is a liability in a culture hellbent on the project of total optimisation. As Klein herself says in exploring the darkness of optimisation, &#8220;The very idea that humans can and should be &#8216;optimized&#8217; lends itself to a fascistic worldview&#8230; If you are safe because your immune system is strong, it can flip to mean others are unsafe because they are weak. If you are optimized, others are, by definition, suboptimal. Defective. Next door to disposable&#8221; (p. 187).</p><p>What happens when the <em>merwelt </em>is exposed&#8212;whether intentionally or through slippage&#8212;is that it jolts a reaction. When that reaction is distrust, what follows should be real debate about the provoking matter. We can, for example, begin to discuss the merits of efficiency, of optimisation, of sorting when we are invited to see that this is what the AI is doing. Those developing AI systems should not attempt to soften reactions by hiding what is at stake&#8212;after all, do they believe in what they are doing, or do they not? If, ultimately, what will cause distrust of your AI system is the logics upon which it is based, better to deal with that early in the pipeline than try to massage distrust later on. </p><div class="pullquote"><p>Creating trust friction as an intentional part of design practice allows for a negotiation of the expectations that cement a trust relationship.</p></div><h4>4. 
Open up the work of imagining as a collective project</h4><p>Now, to throw a spanner in the works, I don&#8217;t believe that Big Tech considers the ideological premises of AI to be up for debate. I think this in itself contributes to a deep yearning for things to be different, far beyond trust friction. Interestingly, according to Freud, it is this kind of yearning itself that underlies the enduring motif of the doppelganger. I&#8217;ll quote Klein again: </p><p>&#8220;Freud speculated that the figure of the doppelganger recurs in the culture in part because the idea of there being duplicate selves stands in for the vast potentialities that our lives hold. We are the product of choices&#8212;made by us, and made by others. But, Freud wrote, those never are the only choices available. There are also &#8216;all the possibilities which, had they been realized, might have shaped our destiny, and to which our imagination still clings, all the strivings of the ego that were frustrated by adverse circumstances, all the suppressed acts of volition that fostered the illusion of free will&#8217;&#8221; (p. 334).</p><p>I am suggesting that so much of our distrust of AI arises from the <em>frustrated strivings of the ego</em>: despair at the shape of AI as it has come to be when compared against the limitless possibilities of human imagination, and a loss of hope that it could be any different given our current techno-politics. In this context, distrust is one of the few forms of dissent available to us, and it expresses a multitude of disappointments and fears. 
</p><div class="pullquote"><p>Invite dissenters to take part in imagining more uplifting expressions for technology that can re-inspire faith in humanity as a basis for widespread trust in technology.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/5-lessons-of-the-doppelganger?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/p/5-lessons-of-the-doppelganger?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p><p>If you have taken away any new insights that I&#8217;ve not listed here, go ahead and let me know in the comments!</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/5-lessons-of-the-doppelganger/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:&quot;button-wrapper&quot;}" data-component-name="ButtonCreateButton"><a class="button primary button-wrapper" href="https://trustbranknowles.substack.com/p/5-lessons-of-the-doppelganger/comments"><span>Leave a comment</span></a></p><h3>One more lesson from <em>Doppelganger</em>&#8230;</h3><p>In her book, Klein studies the different ways the theme of the doppelganger has been explored in pop culture. She returns repeatedly to Philip Roth&#8217;s book, <em>Operation Shylock</em>, pulling on a number of its threads, but in particular borrowing from it the term &#8220;pipikism&#8221;. The term derives from the Yiddish word for belly button, but in Roth&#8217;s words refers to &#8220;the antitragic force that inconsequencializes everything&#8212;farcicalizes everything, trivializes everything, superficializes everything&#8221; (p. 
145).</p><p>This, Klein sees, is what Far Right politics does to the issues that matter to the Left. This happens when, for example, the Far Right calls everyone and everything &#8220;fascist&#8221;. Fascism has a specific, serious meaning, but it has been appropriated by the Far Right to mean little more than &#8220;a person who is against us&#8221;; and eventually, serious, scholarly people who really do mean fascism find themselves backing away from using the word, now tainted by its association with the lunatic fringe. She asks, &#8220;Once an idea has been pipiked, can it ever be serious again&#8221; (p. 145)?</p><p>I raise this because this is what appears to be happening to the words trust and trustworthiness now that they have been commodified by the tech industry. I find myself on the defensive, having to begin my serious inquiries about trust and trustworthiness by explicitly creating distance from &#8220;trustworthy AI&#8221; as it is wielded in technical and regulatory circles. <em>No, I don&#8217;t mean the set of vague principles or spurious metrics, I mean &#8216;Is AI worthy of trust&#8217;! </em>And the same is true with trust. Whereas industry would seem to be satisfied to equate use of an AI with trust of an AI, I want to scream, <em>That&#8217;s reliance! Do they TRUST it??</em></p><p>Trivialising trustworthiness is a great way to get people to think you are not seriously invested in earning their trust. 
And trivialising trust signals an even deeper disconnect, a true flaunting of power&#8212;effectively communicating, &#8220;you need us, we don&#8217;t need you.&#8221;</p><p>So if I could add a bonus lesson to this list, it would be this:</p><div class="pullquote"><p>Stop pipiking trust and trustworthiness!</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/5-lessons-of-the-doppelganger?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/p/5-lessons-of-the-doppelganger?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><h3>Next time</h3><p>This concludes my stretch of constantly talking about Naomi Klein. Next week, I will begin a new multi-part series exploring what it means to develop a practice of being trustworthy, reflecting the book <em>How Can I Be Trusted</em> by Nancy Nyquist Potter into the world of AI development. </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Practicing Trustworthy AI by Bran Knowles! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[What can the doppelganger help us understand about distrust of AI?]]></title><description><![CDATA[Part 3]]></description><link>https://trustbranknowles.substack.com/p/what-can-the-doppelganger-help-us-a17</link><guid isPermaLink="false">https://trustbranknowles.substack.com/p/what-can-the-doppelganger-help-us-a17</guid><dc:creator><![CDATA[Bran Knowles]]></dc:creator><pubDate>Fri, 18 Jul 2025 15:44:51 GMT</pubDate><enclosure url="https://images.unsplash.com/photo-1630480620666-b850f38829c9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxOHx8YW54aWV0eXxlbnwwfHx8fDE3NTI4NTI3Njh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this newsletter, I am continuing on this theme of understanding affectively charged distrust of AI by exploring it as aversion to &#8220;the doppelganger&#8221;.</p><p>So far I&#8217;ve argued that some of our distrust of AI takes the form of <em>disgust</em> at encountering a technology ostensibly made in our image, but which in doing so reflects back to us aspects of ourselves we may not like to stare in the face. I have formulated this as distrust of <strong><a href="https://open.substack.com/pub/trustbranknowles/p/what-can-the-doppelganger-help-us?r=5z1pkv&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false">AI-as-doppelganger</a></strong>. 
</p><p>I then argued that some of our distrust of AI takes the form of <em>resentment</em> at <strong><a href="https://trustbranknowles.substack.com/p/what-can-the-doppelganger-help-us-ea1">AI mistaking us for our &#8220;digital doppelganger&#8221;</a></strong>. This version of ourselves is &#8220;flattened, reduced&#8221;, which as Naomi Klein argues, is &#8220;easier to confuse with a flattened, reduced version of someone else&#8221; (p. 41). So we become indignant when that misrecognition leads to unfairly adverse decisions about us by AI; but perhaps, too, we resent the flattening/reduction that enabled the confusion in the first place.</p><p>Here I will come from yet another angle to help get to grips with <em>anxiety-driven, conspiratorial</em> distrust of AI. Though it lends itself less easily to a pithy formulation, my thesis is that these fears are displaced anxieties about capitalism. </p><p>It&#8217;s easier to fixate on AI, to see it as the cause of the anxiety we are feeling, than to confront our anxiety that we are implicated in the horrors of capitalism. In doppelganger terms, we are displacing <strong>anxiety about the actions of our &#8220;second body&#8221;</strong> which reaps the rewards of pain sown in the <strong>shadow realms of capitalism</strong>. 
</p><p>This requires some explaining&#8230; hang in there with me!</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://images.unsplash.com/photo-1630480620666-b850f38829c9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxOHx8YW54aWV0eXxlbnwwfHx8fDE3NTI4NTI3Njh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://images.unsplash.com/photo-1630480620666-b850f38829c9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxOHx8YW54aWV0eXxlbnwwfHx8fDE3NTI4NTI3Njh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1630480620666-b850f38829c9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxOHx8YW54aWV0eXxlbnwwfHx8fDE3NTI4NTI3Njh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1630480620666-b850f38829c9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxOHx8YW54aWV0eXxlbnwwfHx8fDE3NTI4NTI3Njh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1630480620666-b850f38829c9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxOHx8YW54aWV0eXxlbnwwfHx8fDE3NTI4NTI3Njh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw"><img src="https://images.unsplash.com/photo-1630480620666-b850f38829c9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxOHx8YW54aWV0eXxlbnwwfHx8fDE3NTI4NTI3Njh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080" width="4256" height="2832" 
data-attrs="{&quot;src&quot;:&quot;https://images.unsplash.com/photo-1630480620666-b850f38829c9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxOHx8YW54aWV0eXxlbnwwfHx8fDE3NTI4NTI3Njh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2832,&quot;width&quot;:4256,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;woman in gray crew neck shirt&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="woman in gray crew neck shirt" title="woman in gray crew neck shirt" srcset="https://images.unsplash.com/photo-1630480620666-b850f38829c9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxOHx8YW54aWV0eXxlbnwwfHx8fDE3NTI4NTI3Njh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 424w, https://images.unsplash.com/photo-1630480620666-b850f38829c9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxOHx8YW54aWV0eXxlbnwwfHx8fDE3NTI4NTI3Njh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 848w, https://images.unsplash.com/photo-1630480620666-b850f38829c9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxOHx8YW54aWV0eXxlbnwwfHx8fDE3NTI4NTI3Njh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1272w, https://images.unsplash.com/photo-1630480620666-b850f38829c9?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wzMDAzMzh8MHwxfHNlYXJjaHwxOHx8YW54aWV0eXxlbnwwfHx8fDE3NTI4NTI3Njh8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=1080 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" 
class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Photo by <a href="true">Alexei Maridashvili</a> on <a href="https://unsplash.com">Unsplash</a></figcaption></figure></div><h2>Getting the feelings right</h2><p>A major component of Klein&#8217;s book, <em>Doppelganger</em>, involved trying to make sense of the paranoic, unhinged chaos of what she calls the Mirror World. This Mirror World is the sinister double of liberal politics, the fascistic flip that has been gaining momentum across the globe. In observing the tactics of folks like Steve Bannon, Klein discerns a strategy for building alliances and converting followers. 
The Bannon playbook is to pay attention to the issues that liberals should be caring about but are either ignoring or neglecting, and then to claim these as issues owned by the Far Right, creating a safe haven for the disaffected. Klein goes on to say, &#8220;[Bannon] is also taking note of more subtle failings&#8212;the way issues are discussed, the way disagreements are negotiated, the way people are treated by their friends and comrades. Through the one-way glass, he is studying all our hypocrisies and inconsistencies so that he can make a show of doing the exact opposite&#8221; (p. 126).</p><p>The genius of this strategy is that it capitalises on people&#8217;s anxiety by (merely) legitimising the feeling. Of course, the Far Right is no more likely to offer a genuine solution to the problems creating the anxiety. But people feel heard, which is better than being gaslit by those who are supposed to be fighting for these issues.</p><p>There is another Far Right tactic at play, which I will call &#8220;Look over there!&#8221; The angry hordes they lead are crying out for someone to do something about problems created by neoliberal economic policies and/or unfettered capitalism and its perverse incentives. These are not things the Far Right has any intention to address, as they provide the party&#8217;s ultimate power source (i.e. disproportionate spending power + increasingly angry hordes). So their trick is to redirect anxiety to culture-war issues (p. 285). Everything becomes so twisted in the Mirror World because of the strategic severing of feeling from fact&#8212;what matters is that people are riled up about something that is not <em>the </em>thing (capitalism). </p><p>The conclusion Klein reaches is that all of the bizarre conspiracy theories lighting up Right Wing discourse are, she quotes from Gilroy-Ware, &#8220;a misfiring of a healthy and justifiable political instinct: suspicion&#8221; (p. 243). 
Conspiracy theorists, she summarises, &#8220;get the facts wrong but often get the <em>feelings </em>right&#8212;the feeling of living in a world with Shadow Lands, the feeling that every human misery is someone else&#8217;s profit, the feeling of being exhausted by predation and extraction, the feeling that important truths are being hidden&#8221; (p. 243).</p><h2>The &#8220;second body&#8221;</h2><p>What fascinates Klein is not really the political machinations that spawn the Mirror World, but rather the psychology of avoidance, the things humans do to avoid seeing &#8220;what we cannot bear to see&#8212;in our past, in our present, and in the future racing toward us&#8221; (p. 322). What makes conspiracy theories so enticing is that they divert anxieties to the doings of a definable &#8216;other&#8217;. The doppelganger formulation Klein favours in the book is that all the work of avoidance (performing, partitioning, and projecting) is to save us from having to face our &#8220;second body&#8221; (our &#8220;true doppelganger&#8221;, as Klein calls it): &#8220;the one enmeshed with wars and whales, the one benefitting from the genocides of the past and adding our little drops of poison to the great die-offs of the future. The second body that perpetually mines the Shadow Lands for its comforts and conveniences. We avoid because we do not want to be bodies like that&#8221; (p. 322).</p><h2>Finally getting to the matter of distrust of AI</h2><p>I want to propose that a great deal of distrust of AI is not about the technology, per se. </p><p>It is easy to mistake distrust as an objection to this or that tool, or this or that technique. Individuals may even convince themselves that AI is the cause of their anxieties, without recognising it as the technical embodiment of an existentially threatening ideology about which we are justifiably anxious.</p><p>I am not meaning to suggest that AI is perfectly harmless. 
AI causes real, direct and indirect harms to people; and many of these harms are inflicted in the Shadow Lands through the digital poorhouses that use algorithms to manage and punish the poor (see <em>Automating Inequality</em>, by Virginia Eubanks), or on the invisible data workers of the Global South we traumatise, or on the people (also disproportionately living in the Global South) ravaged by climate change exacerbated by a rapacious appetite for data and a misguided race to maximise computer processing power (see <em>Atlas of AI</em>, by Kate Crawford). These horrors are not unique to AI, however; they reflect the very predation and extraction required of modern capitalism. What is scary about AI&#8212;the feeling that we get right&#8212;is that it is the current tool for entrenching and scaling up a deeply harmful capitalist logic.</p><h2>Flaunting the mechanics of the oligarchy</h2><p>It is also deeply relevant to our anxieties about AI, I should think, that Tech Billionaires are our culture&#8217;s most visible representation of the inequities in capitalist societies. Klein writes, &#8220;In every case, they take up the mantle of solving the world&#8217;s problems&#8212;climate breakdown, infectious diseases, hunger&#8212;with no mandate and no public involvement and, most notably, no shame about their own central roles in creating and sustaining these crises. Knowing that this kind of unmasked plutocracy can take root in democratic societies without so much as an effort to hide it is like being forced to watch your spouse cheat on you when that is not your kink&#8221; (p. 240). </p><p>Klein&#8217;s point is that this is fuel for conspiracy culture, and that channelling all the negative feelings this conjures up onto frenetic uncoverings of evildoings by nefarious villains could be understood, from a place of compassion, &#8220;as some sort of twisted lunge for self-respect&#8221; (p. 240). 
</p><p>My point is that we are all too eager to villainise Musk or Zuckerberg (believe me, they make it so easy and so tempting!), to put the blame on individual people and individual AIs (again, there&#8217;s so much to hate about so many AIs). But it&#8217;s distraction theatre that stops us from engaging in a real critique of capitalism. And the more we do the dance of avoidance, the more our anxiety grows, and the more we distrust AI for reasons we can&#8217;t (or refuse to) really pinpoint. </p><h2>A grim thought</h2><p>I am led to the rather upsetting conclusion that our leaders&#8217; lip service to &#8220;trustworthy AI&#8221; is exactly the type of core issue abandonment that energises the conspiratorial Right Wing and grows its ranks.</p><p>People are upset about hugely important things like privacy, justice, climate change; and the legislative response could not have been better designed to alienate if that had been the point. Perhaps this is how we understand the timing of our global fascistic flip.</p><h2>To conclude</h2><p>My goal is not to diagnose our current politics. My goal is to make sense of what can feel at times like AI conspiracy. People appear to be distrusting AI for really weird reasons. Technologists get drawn into arguments about the facts, trying to explain why people are wrong about the things they&#8217;re afraid of. But what these fact-checkers are failing to understand is that <em>the feelings are right</em>. AI is in many ways the wrong target, but distrust of AI makes sense&#8212;it is our lunge for self-respect in a world that&#8217;s rife with gaslighting. 
</p><p>&#8220;Look over there!&#8221;</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/subscribe?"><span>Subscribe now</span></a></p><p>As ever, I&#8217;d love for this to spark discussion. Have I gotten it wrong? Have I extended the metaphor too far this time? Or does this help you make sense of something you&#8217;d struggled to articulate?</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/what-can-the-doppelganger-help-us-a17/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/p/what-can-the-doppelganger-help-us-a17/comments"><span>Leave a comment</span></a></p><p><em>If you enjoyed this newsletter, please click the Like button so that more people can discover it on Substack. 
And do feel free to share with anyone you think might be interested.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/what-can-the-doppelganger-help-us-a17?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/p/what-can-the-doppelganger-help-us-a17?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[What can the doppelganger help us understand about distrust of AI?]]></title><description><![CDATA[Part 2]]></description><link>https://trustbranknowles.substack.com/p/what-can-the-doppelganger-help-us-ea1</link><guid isPermaLink="false">https://trustbranknowles.substack.com/p/what-can-the-doppelganger-help-us-ea1</guid><dc:creator><![CDATA[Bran Knowles]]></dc:creator><pubDate>Thu, 10 Jul 2025 11:59:39 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!QDsI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a44d179-51fd-4d9d-88cb-5d85bde4eeaa_904x910.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I pick up where I left last week, with further thoughts about the relevance of the doppelganger to the depth of our distrust of AI. I should back up before going forward, and begin by briefly summarising the book inspiring these thoughts.</p><p>Naomi Klein explains that she was compelled to write <em>Doppelganger</em> because she was being perpetually confused with Naomi Wolf&#8230; and it was driving her a little bit <em>mad</em>. 
In part, this was because &#8220;Other Naomi&#8221;, as she calls her, strays very far indeed from the &#8220;personal brand&#8221; of Original Naomi &#8211; people were attributing Wolf&#8217;s extreme (I&#8217;d say monstrous) Right-wing politics to Klein, a devoted Lefty who stumped for Bernie Sanders. The book, which documents Klein&#8217;s experience of disorienting &#8220;unselfing&#8221; from being twinned with this reverse-politics doppelganger, ultimately leads to profound and frequently beautiful insights into human nature and interdependence. (<a href="https://www.theguardian.com/books/2023/sep/09/doppelganger-a-trip-into-the-mirror-world-by-naomi-klein-review-a-case-of-mistaken-identity">Here</a>&#8217;s a nice summary for further background.)</p><p>In the last newsletter, I invited readers to consider how discomforting (and distrust provoking) it can be to encounter our AI doppelganger as a sort of caricature of oneself, particularly when that caricaturising reveals aspects of ourselves we would rather not confront. For this newsletter, I will focus not on <em>AI as doppelganger</em>, but rather how AI mistakes <em>us</em> for our &#8220;digital doppelgangers&#8221; and the distrust this produces.</p><p>Klein briefly touches on this phenomenon on her way through a deeper critique of a culture that demands everyone develop their own personal brand and the surveillance capitalism that preys on our desperation to meet this demand. She makes a distinction between the &#8220;aspirational avatars&#8221; that people consciously curate and the &#8220;real digital doppelgangers&#8221; created unintentionally as AIs assimilate the digital trails we leave behind. 
She explains, &#8220;Every data point scraped from our online life makes our double more vivid, more complex, more able to nudge our behavior in the real world&#8230; This machine made doppelganger&#8230; has a great deal in common with a human doppelganger: a person whom the world confuses with you but who is not actually you and yet can impact your life in profound ways.&#8221;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!QDsI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a44d179-51fd-4d9d-88cb-5d85bde4eeaa_904x910.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!QDsI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a44d179-51fd-4d9d-88cb-5d85bde4eeaa_904x910.png 424w, https://substackcdn.com/image/fetch/$s_!QDsI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a44d179-51fd-4d9d-88cb-5d85bde4eeaa_904x910.png 848w, https://substackcdn.com/image/fetch/$s_!QDsI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a44d179-51fd-4d9d-88cb-5d85bde4eeaa_904x910.png 1272w, https://substackcdn.com/image/fetch/$s_!QDsI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a44d179-51fd-4d9d-88cb-5d85bde4eeaa_904x910.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!QDsI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a44d179-51fd-4d9d-88cb-5d85bde4eeaa_904x910.png" width="380" 
height="382.5221238938053" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8a44d179-51fd-4d9d-88cb-5d85bde4eeaa_904x910.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:910,&quot;width&quot;:904,&quot;resizeWidth&quot;:380,&quot;bytes&quot;:224107,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://trustbranknowles.substack.com/i/167725476?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a44d179-51fd-4d9d-88cb-5d85bde4eeaa_904x910.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!QDsI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a44d179-51fd-4d9d-88cb-5d85bde4eeaa_904x910.png 424w, https://substackcdn.com/image/fetch/$s_!QDsI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a44d179-51fd-4d9d-88cb-5d85bde4eeaa_904x910.png 848w, https://substackcdn.com/image/fetch/$s_!QDsI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a44d179-51fd-4d9d-88cb-5d85bde4eeaa_904x910.png 1272w, https://substackcdn.com/image/fetch/$s_!QDsI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a44d179-51fd-4d9d-88cb-5d85bde4eeaa_904x910.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg 
role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>                                                 (Source: <a href="https://www.linkedin.com/posts/wired_naomiklein-doppelgaeunger-climate-activity-7105591589984501760-ojqy?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAEeHoR4BVEFWoSpXoMNdJ8WaQPphHjRamYg">WIRED, on LinkedIn</a>.)</p><p>Before I had this useful language of doppelgangers, I had explored the ramifications of AI misrecognising the people about whom it makes decisions. In a paper titled &#8220;Humble AI&#8221; (available <a href="https://cacm.acm.org/research/humble-ai/">here</a>), collaborators Jason D&#8217;Cruz, John T. Richards, Kush R. Varshney and I argued that misrecognition by AI, and the cascading consequences thereof, produces pernicious spirals of increasingly entrenched (though largely justified) distrust of AI. 
We described how AI systems involved in consequential decision making &#8211; i.e. whether to grant or deny opportunity to people in contexts such as approvals for loans, housing, bail, custody, etc. &#8211; do not deal with individuals&#8217; <em>trustworthiness </em>(what would be a moral basis for granting opportunity). When humans assess trustworthiness &#8211; imperfectly, for sure, and susceptible to the harmful influence of prejudice &#8211; they do so by seeking to understand reasons, motivations, and circumstances that incline someone towards trustworthiness or not, as well as the presence or absence of morally excusing conditions for lapses in trustworthiness. What AI does, in contrast, is make a prediction about future behaviour based on comparatively context-free data about an individual&#8217;s past behaviour <em>combined with</em> an estimation of similarity to other people and their past behaviour.</p><p>There are of course criticisms of both the techniques and opacity of this clustering, as well as impassioned objections to treating people as one instance of a cluster, and for good reason. Klein touches on this, too &#8211; how as people move about in the world they are seen not as they are but &#8220;as a type&#8221;, held to account for others &#8220;like&#8221; them, their individual stories reduced to the collective story. Our focus in writing &#8220;Humble AI&#8221; was, first, to examine how people&#8217;s digital doppelgangers impact decisions made about them by AI &#8211; a sort of translation of the phenomenon Klein described to the AI world. But more originally, our main focus was on the moral perils of systems prone to mistaking people for their untrustworthy doppelganger, such that people receive adverse decisions on this basis.</p><p>In the paper we offer &#8220;Humble AI&#8221; as an antidote to characteristically &#8220;distrustful AI&#8221;. 
In coining the phrase &#8220;distrustful AI&#8221;, we sought to draw attention to the problematic tendency to design AIs to minimise the risks of false positives for those deploying them &#8211; e.g. avoiding granting loans to anyone who might default &#8211; and the subsequent biasing of such systems towards any potential signal of untrustworthiness and finding reasons to withdraw opportunity from people. We argued that just as people tend to be distrustful of those who distrust them, for fear of being unfairly treated, &#8220;distrustful AI&#8221; provokes <em>reciprocated distrust</em> by people for the very same reasons. We show how misrecognition, particularly when experienced repeatedly &#8211; as is the case for anyone mistaken for their &#8220;ethnic double&#8221;, as Klein shows, and which happens in the context of AI when systems take other AIs&#8217; outputs as inputs &#8211; breeds resentment, demoralisation, and contempt. By exposing these dynamics, we hoped to better understand people&#8217;s <em>totalising distrust</em> of AI, as represented by <a href="https://www.bcs.org/articles-opinion-and-research/the-public-dont-trust-computer-algorithms-to-make-decisions-about-them-survey-finds/">polls</a> showing that the majority of the public does not trust AI to make decisions about &#8220;any aspect of their lives&#8221;.</p><p>In the paper, we end by exhorting AI developers to design systems towards the more moral aim of <em>avoiding distrust of the trustworthy</em>. We offer affirmative technical measures for this more &#8220;humble AI&#8221;, such as continuously seeking out new evidence of trustworthiness that hadn&#8217;t yet been included as a feature in the model; doing more experimentation with decision thresholds to reduce false negatives; and investing in human oversight of cases where decisions are uncertain. 
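<p>To make the threshold point concrete, here is a toy sketch of the trade-off (my own illustration, with made-up scores and labels &#8211; not code from the paper): a high approval threshold minimises false positives for the deployer, while a lower one reduces false negatives, i.e. trustworthy people wrongly denied.</p>

```python
# Toy illustration (not from the paper): how the choice of decision
# threshold trades false positives against false negatives.

def confusion(scores, labels, threshold):
    """Approve anyone scoring >= threshold; count the two error types.
    labels: 1 = genuinely trustworthy, 0 = not."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

# made-up model scores and ground-truth labels
scores = [0.95, 0.80, 0.70, 0.55, 0.40, 0.30, 0.20]
labels = [1,    1,    0,    1,    1,    0,    0]

# A "distrustful" threshold: no false positives for the deployer,
# but two trustworthy people are denied (false negatives).
print(confusion(scores, labels, 0.75))  # (0, 2)

# A "humbler" threshold: one extra false positive, but no trustworthy
# person is denied.
print(confusion(scores, labels, 0.35))  # (1, 0)
```

<p>The point of the sketch is only that the threshold is a moral choice, not a technical given: nothing in the model itself dictates which of the two error rates gets minimised.</p>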
But we also end on a note of hope about the potential of AI to enable us to overcome our human tendency to see people as their doppelganger, as sufficiently &#8220;like&#8221; someone else to be tarred with the same brush. &#8220;Human attitudes of trust and distrust can be altered indirectly,&#8221; we write &#8211; and typically very slowly, such as through cultural change &#8211; &#8220;but they are not under a person&#8217;s direct voluntary control.&#8221; The radical potential of these tools, thus far unrealised in the dominant paradigm of &#8220;distrustful AI&#8221;, lies in purposeful manipulation of AI systems&#8217; &#8220;willingness to trust&#8221; individuals to align with moral aims (such as, say, racial justice). But most of all, to be &#8220;humble&#8221; means to accept that the work of betterment, of doing better at treating people as they deserve to be treated, is never complete.</p><p>If you have directly experienced being confused for your AI doppelganger, I&#8217;d love to hear about it and about how it shapes your attitudes to AI.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/what-can-the-doppelganger-help-us-ea1/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/p/what-can-the-doppelganger-help-us-ea1/comments"><span>Leave a comment</span></a></p><p><em>If you enjoyed this newsletter, please click the Like button so that more people can discover it on Substack. 
And do feel free to share with anyone you think might be interested.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/what-can-the-doppelganger-help-us-ea1?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/p/what-can-the-doppelganger-help-us-ea1?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Practicing Trustworthy AI by Bran Knowles! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[What can the doppelganger help us understand about distrust of AI?]]></title><description><![CDATA[Part 1]]></description><link>https://trustbranknowles.substack.com/p/what-can-the-doppelganger-help-us</link><guid isPermaLink="false">https://trustbranknowles.substack.com/p/what-can-the-doppelganger-help-us</guid><dc:creator><![CDATA[Bran Knowles]]></dc:creator><pubDate>Thu, 03 Jul 2025 15:27:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Oy-F!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb24e361-2253-441f-b32f-9182bb09e587_2225x2578.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Everyone needs to read Naomi Klein&#8217;s book <em>Doppelganger</em> &#8211; her best yet, in my opinion. 
With this first newsletter (!!!), I&#8217;m kicking off a series of reflections on the book as a way of making sense of reactive attitudes to AI, because it&#8217;s too easy to dismiss discomfort with AI as a kind of ignorance, when it&#8217;s so much more than that, and so instructive if we take the time to unpack it.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Oy-F!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb24e361-2253-441f-b32f-9182bb09e587_2225x2578.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Oy-F!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb24e361-2253-441f-b32f-9182bb09e587_2225x2578.png 424w, https://substackcdn.com/image/fetch/$s_!Oy-F!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb24e361-2253-441f-b32f-9182bb09e587_2225x2578.png 848w, https://substackcdn.com/image/fetch/$s_!Oy-F!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb24e361-2253-441f-b32f-9182bb09e587_2225x2578.png 1272w, https://substackcdn.com/image/fetch/$s_!Oy-F!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb24e361-2253-441f-b32f-9182bb09e587_2225x2578.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Oy-F!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb24e361-2253-441f-b32f-9182bb09e587_2225x2578.png" width="1456" height="1687" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cb24e361-2253-441f-b32f-9182bb09e587_2225x2578.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1687,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7935934,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://trustbranknowles.substack.com/i/167441056?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb24e361-2253-441f-b32f-9182bb09e587_2225x2578.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Oy-F!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb24e361-2253-441f-b32f-9182bb09e587_2225x2578.png 424w, https://substackcdn.com/image/fetch/$s_!Oy-F!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb24e361-2253-441f-b32f-9182bb09e587_2225x2578.png 848w, https://substackcdn.com/image/fetch/$s_!Oy-F!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb24e361-2253-441f-b32f-9182bb09e587_2225x2578.png 1272w, https://substackcdn.com/image/fetch/$s_!Oy-F!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcb24e361-2253-441f-b32f-9182bb09e587_2225x2578.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>So for this first installment, I want to present the idea of &#8216;AI as doppelganger&#8217;, and what it is about encountering our doppelganger that is so unsettling. And I will begin a pattern of reflecting <em>Doppelganger</em> against another book: in this instance, <em>The Shape of Thought: Reasoning in the Age of AI</em>, by my dear colleague, Richard Harper.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Practicing Trustworthy AI by Bran Knowles! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Harper&#8217;s book is a critique of the narrowing of the concept of intelligence to mean only the form(s) of intelligence that can be supercharged with computer processing power. He terms AI premised on this limited notion of intelligence &#8220;narrow AI&#8221; (NAI). In this paradigm, technology can (and will) exceed the human mind, and artificial intelligence is, thus, &#8220;us made better&#8221;. But of course, it is a very particular &#8220;us&#8221;: a funhouse mirror version, with an enormous head and no heart, if you will. And in this mirror world, everything we do is reduced to &#8220;some kind of analytical act&#8221;.</p><p>While Harper did not speak to the uncanniness of the doppelganger, his analysis could be read as a statement about how our own intelligence is made uncanny &#8211; that intelligence as embodied/amplified by NAI &#8220;is a feature of human nature, [but] not its sum&#8221;. We naturally recoil from representations that are like us but for certain essential ingredients of being human. </p><p>It&#8217;s tempting to focus on the superficial ways AI reveals itself when it attempts to mirror us, and the visceral reactions this produces. (I suspect I&#8217;m not alone in being able to sniff out ChatGPT-generated content and wincing at the stench of it, not to mention loathing the person attempting to pass it off as their own work.) 
But what I&#8217;m interested in here is not, in fact, the uneasiness of recognising not-quiteness, but its opposite: recoiling from the implications of <em>sameness</em> to this doppelganger.</p><p>As Klein&#8217;s book shows, the doppelganger motif is not about what&#8217;s wrong with the other self &#8211; say, here, its exposing faux pas, or the many other complaints we may (rightly) have about AI. The story of the doppelganger is always about a struggle with the original self. The doppelganger shows us the part of ourselves we don&#8217;t want to see; it reveals to us our inherent duality, as both good and bad. When we are unsettled by this AI doppelganger, is it in part because it is forcing us to reckon with this side of ourselves that can be &#8211; and has been, though we&#8217;d prefer not to think about it &#8211; led by &#8220;intelligence&#8221; to some very wrong decisions?</p><p>My thoughts are just beginning to unfold, but my aim has been to draw attention to something akin to <em>disgust</em> that is animating people&#8217;s distrust of AI. I often see computing folk baffled and frustrated by lay people being <em>weird</em> about AI (What are all these intense emotions about?), and this may begin to help us understand why. There is something unsettling, indeed, about the kind of beings AI is telling us we ought to be and have been all along; and it forces a seriously uncomfortable question about what (and who) we have always been willing to sacrifice through adherence to intelligence.</p><p>I&#8217;d love to hear whether this resonates with you. Is the kind of intelligence modelled by AI really us <em>at our best</em>? 
And if other forms of intelligence represent our best selves, what would AI look like if it emulated them?</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/what-can-the-doppelganger-help-us/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/p/what-can-the-doppelganger-help-us/comments"><span>Leave a comment</span></a></p><p><em>If you liked this newsletter, please click the heart button so that more people can discover it on Substack. And do feel free to share with anyone you think might be interested.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/p/what-can-the-doppelganger-help-us?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://trustbranknowles.substack.com/p/what-can-the-doppelganger-help-us?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://trustbranknowles.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Practicing Trustworthy AI by Bran Knowles! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item></channel></rss>