Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "If AI is doing all the work, then who pays AI? It can't be us humans because we …" (ytc_Ugz7kMMge…)
- "You come to the conclusions that a machine thinks the same without chemical reac…" (ytc_UgwUmejo4…)
- "fur is awful at AI, also the eyes seems so dead, like she put the eyebrow up, bu…" (ytc_UgzNFnFb-…)
- "It won't. The benefits are completely overstated. All of these AI systems have i…" (ytc_Ugy4lueBK…)
- "Ask AI for solutions to these problems. As AI gets smarter it will give better …" (ytc_UgwGB54Aw…)
- "@jaredgibbs16 No AI is a new submission. You know the operating system of the c…" (ytr_UgxAkK4Vh…)
- "No way. AI doesn't even have a basic understanding of human emotions, which is a…" (ytc_UgwJPXrmy…)
- "The initial energy requirements of AI is substantial but once the models are tra…" (ytc_UgzGZ-Kfn…)
Comment
"What was those 5 jobs that stays? From that interview it seems anything can be done by AI"
youtube · AI Governance · 2025-09-09T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyyTi1tKzAo1maLjyZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyZWEFR3zsOIE-GKI54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy6Z7XzKLp6C1Nda3V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx36MgoeNZEyKU0sbt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz8pN2mPsWxnnbnxT14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw2sr1coeuS1Wyf4Z94AaABAg","responsibility":"none","reasoning":"unclear","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgyP2JqZjD2iJ2bhuDx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgzHdHtpaAyOwA8Oa5d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugytlke94Q7li1QcIPJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgydFGDIpyMFuhLzXuh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```
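The raw response above is a JSON array of per-comment codes across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and indexed for lookup by comment ID, assuming the label sets are exactly those observed in this sample (the full codebook may define more values):

```python
import json

# Label values observed in this sample; assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "distributed", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "liability", "regulate"},
    "emotion": {"indifference", "approval", "mixed", "fear", "outrage"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse a raw LLM response (JSON array) into a dict keyed by comment ID,
    rejecting any record that carries an unknown label."""
    coded = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r} label {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Usage: index a one-record batch, then look up by comment ID.
raw = ('[{"id":"ytc_UgyyTi1tKzAo1maLjyZ4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
coded = parse_coded_batch(raw)
print(coded["ytc_UgyyTi1tKzAo1maLjyZ4AaABAg"]["emotion"])  # indifference
```

Validating labels at parse time catches the common failure mode where the model invents an off-schema value, before it silently pollutes downstream counts.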