Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
They are telling us what’s going to happen - are we listening? When a robot has …
ytc_Ugy1WHPSJ…
I get it but for me, ChatGPT is letting me do stuff I never could have imagined.…
ytc_Ugx7zHp27…
@MedicinalSquishing because the ai is the one that is trained on stolen art, whi…
ytr_UgwEdjG_O…
If AI won't end us, we will surely manage it ourselves. We create technology tha…
ytc_Ugx0gbxHr…
Automated customer service is a human nightmare. You will not get your problem r…
ytc_UgwA8ctZO…
Exactly.. Whilst wrapping would be great to get to the desired outcome with expe…
ytr_UgwKfvLLK…
What would've happened if these guys had simply admitted their mistake at the ve…
ytc_Ugyk8aGxj…
True agentic and autonomous AGI, ushered in by the human intelligence Super Evo…
ytc_UgwCfpQtV…
Comment
An example in the near future: a moderator algorithm monitoring a social media channel is accused of political bias when censoring content. A court case follows. Can the A.I. backtrack how it made the decisions it did, and will that be in a human-understandable form? Most likely, thousands of data points made their way through a neural network of some sort and led to the actions it took, and we humans can't keep up. Transparency, traceability, and accountability have left the chat...
youtube
AI Responsibility
2024-10-01T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugyk6OwUf_kKbbhRypR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgyKmgBBKlLT94_zlqV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwl6ATNNe9tNjvqX5p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzUPhXYdwVMM5YlriZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx0Lt693Smzw05ED9t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwIngCadElQk3O5sRJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx3ziigBM_K1Rykfax4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyFjeDma80VXxeJXeF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxLYOW6SFhpwbJblbZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugyi9MMsHe_3ETK3IUt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}
]
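The raw response above is a JSON array with one object per comment, each carrying an `id` plus four coded dimensions. A minimal sketch of parsing and validating such a response might look like the following; the allowed-value sets are inferred from this one sample and are almost certainly incomplete (an assumption, not the tool's actual schema):

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# NOTE: assumption only -- the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response; keep only rows whose values are known."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if "id" not in row:
            continue  # every coded row must reference a comment ID
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical one-row response for illustration:
sample = ('[{"id":"ytc_x","responsibility":"company",'
          '"reasoning":"deontological","policy":"ban","emotion":"outrage"}]')
print(len(validate_codes(sample)))  # 1
```

Validating against an explicit codebook like this catches the common failure mode where the model invents a category label that was never in the prompt, so such rows can be flagged for re-coding rather than silently stored.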