Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- ytc_Ugx2Ngaom… — "Literally couldn't tell the difference between the AI art process and the physic…"
- ytc_UgwCKn3SC… — "I say too many people, including Bill Gates and other top-level industry figures…"
- ytr_Ugxf2sQa9… — "Well I don't want to get philosophical here but the fact of the matter is that w…"
- ytc_UgwLArQBJ… — "The reporter comparing a shit posting bot AI to real AI's, we don't deserve to h…"
- rdc_nk903ep — "A lot of things happen. First, 22 states are already in a recession. Second, t…"
- ytc_UgxRsjLh0… — "Ai is just new software running on new hardware against data. This is not a 5th …"
- ytc_Ugyie0P3l… — "I check REAL good before knowing its ai or real, sometimes i dont need to look t…"
- ytc_UgxbeLgMP… — "So am I trippin or is the robot impersonating Bruce Lee style of fighting? Ngl i…"
Comment
The next big break through in AI won't come from AI but the tools we invent to help us manage it, including governance tools.
Why should a single organisation or institution govern how AI is going to be distributed or consumed by other organisations or institutions?
Part of the problem of having standards is that it is a multi-stakeholder response to a common problem. Having one stakeholder determine what that problem is going to be for all other stakeholders, obviously doesn't scale.
The next big break through has to come from the disruption of our current political systems, so that stakeholders in AI from suppliers through to consumers can determine what the policies or rules are going to be on a case by case basis.
It won't come from a single organisation such as central or local government creating constraints on institutions or organisations even before they have had the chance to define what the problem is.
Source: youtube
Posted: 2025-04-24T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgylLAQYPGExeBVEObV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwXa0f8hVACxq_DPi54AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx_OGsc_OXgsU_VcrZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx9KtkwgLQTisGDdBx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx7dfTBPbGCAvB5Imd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugxukg3rTFn2jL2prRR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyOEXCc60mpTmpwHfh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgznK_phH1g46YI4uNV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzPW9-ufMvHQPJwgdx4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzzpyYpfO5fBQVIoSh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
```
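A downstream script can recover the coded dimensions for a given comment by parsing this raw response and indexing the rows by `id`. The sketch below is illustrative, not the tool's actual implementation: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown above, and `RAW_RESPONSE` reuses the row that matches the Coding Result table; the function name and validation logic are assumptions.

```python
import json

# One row from the raw response above (the government/contractualist
# entry that matches the Coding Result table for this comment).
RAW_RESPONSE = """[
  {"id":"ytc_UgzPW9-ufMvHQPJwgdx4AaABAg","responsibility":"government",
   "reasoning":"contractualist","policy":"regulate","emotion":"mixed"}
]"""

# The four coding dimensions each row is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response and index coded rows by comment ID.

    Raises ValueError if a row is missing its ID or any dimension,
    so malformed model output fails loudly instead of silently.
    """
    coded = {}
    for row in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in row]
        if "id" not in row or missing:
            raise ValueError(f"malformed row: {row!r}")
        coded[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return coded

codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_UgzPW9-ufMvHQPJwgdx4AaABAg"]["policy"])  # → regulate
```

Failing loudly on missing fields matters here because the model output is free-form JSON: a dropped key should surface at parse time, not as a silent gap in the coded dataset.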