Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I saw this coming when Sarah Connor told me it was coming. Who would have thoug…" (`ytc_UgxIyiVWc…`)
- "@cinnamon8884 No, because those are crimes. Deepfakes aren't, because there's a …" (`ytr_Ugw64e9Tp…`)
- "Why do these people just assume AI would want to just kill us all? If we can mak…" (`ytc_UggvPjoPg…`)
- "This was literally my introduction to the channel. I took one look, saw an aweso…" (`ytc_Ugx_UAEOt…`)
- "That's right Mq. My problem is that it has taken far too long to create mobile …" (`ytr_UgyZbpcPa…`)
- "I've moved to AI doctors years ago for advice. Human doctors are not getting my …" (`ytr_UgwmOnVV4…`)
- "people will live on the streets, huge criminals will be, problem with food, hug…" (`ytc_Ugx6sAPka…`)
- "My quantum algorithm is a SPECIFIC HIGH FIDELITY TINY MODULAR SIGNATURE that use…" (`ytc_UgxeP_F95…`)
Comment
Will forcing (through legislation) AI companies to constantly present in a human readable format acknowledgement of the official source of the information it presents to help shape/control the effect AI may have on us? AI is getting its information from various sources. And then financially compensate the source of information. Some bad actors. Some good. Some who don't know what they're talking about (A1 in schools :). For example, ACD is a real thing, and so by making AI constantly acknowledge the source of its information, it can be viewed with an element of suspicion by us and that will make us double-check everything AI does or presents as the answer to the query. Suspicion and uncertainty creates control. I'm not an AI expert or can even pretend to be. But, it's just a thought.
youtube · AI Governance · 2025-09-09T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgzV_EF9fQhDODMLk3R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxd1rpOkqzq7hREItN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyTjSeg-EFwYlQEd9B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyFvS7bX0C197aMc8t4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxfGMJIL3UV-1SpswF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyZUC5g4nf2rW1V2Wx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyBSSoqbooFBbrwnRN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzMJpF1r4644qJ2R594AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxZU9_RQDuLR4Kf4UZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzIGRrzRaQLOVQALI54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
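The raw response is a JSON array with one object per comment, keyed by comment ID plus the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and validated before storing, assuming the value sets visible in the sample output above (the function name and `SCHEMA` sets are illustrative, inferred from the displayed data rather than an official codebook):

```python
import json

# Allowed codes per dimension, inferred from the sample LLM output above.
# This is an assumed schema for illustration, not the tool's actual codebook.
SCHEMA = {
    "responsibility": {"company", "government", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "fear", "mixed"},
}

def parse_coding_response(raw: str) -> list:
    """Parse a raw LLM coding response, rejecting rows with unknown codes."""
    rows = json.loads(raw)
    coded = []
    for row in rows:
        if not row.get("id"):
            raise ValueError("row missing comment id: %r" % (row,))
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    "%s: invalid %r value %r" % (row["id"], dim, row.get(dim))
                )
        coded.append(row)
    return coded

raw = (
    '[{"id":"ytc_example","responsibility":"company",'
    '"reasoning":"deontological","policy":"regulate","emotion":"mixed"}]'
)
print(parse_coding_response(raw)[0]["policy"])  # regulate
```

Validating at ingest time keeps hallucinated or off-schema codes out of the coded dataset; a row that fails here can be re-queued for recoding instead of silently skewing the dimension counts.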