Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `ytr_UgyOyVgpu…`: @_sayandas I completely understand your point about how AI will create new job o…
- `ytc_Ugyk1KtPl…`: I for one would like to take this opportunity to welcome our AI overlords... …
- `ytc_UgzrDS68U…`: Here's what I think. Both Alex and his ChatGPT instance flattened this argument …
- `rdc_ko8r0fh`: Every tech company lies about everything. Dont trust Google. They have every mes…
- `ytr_UgwlpQJbf…`: @markcrawford5810 Digital and traditional is just a medium. And its being crea…
- `ytc_UgyeldY1j…`: I am so glad to hear that humans are the ultimate source of artistic perfection …
- `ytc_Ugw5N--2I…`: I think the only reason someone would freak out like that is because they have s…
- `ytc_Ugy6sJS9E…`: F*** I do the same thing too no complaining no whining no saying I need $50 an h…
Comment
Impossible for AI to become a danger to us when the become smarter for one simple reason. Wants and Needs. AI will never be capable of wants and needs.
In order to have wants and needs you must have emotional feelings which AI will never attain. Wants and needs are to make us feel more comfortable in our lives. So with AI not or never have feelings, they could never think of what they want or need, therefore they will never (want or need) to take over.
Platform: youtube
Topic: AI Governance
Posted: 2025-06-16T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz3zzyEG5V68b3yGjh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxCRgWpo1KFa49Zaj14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxLmHOY9xh-ckjFXrF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxSUpib1hcRSwdrVQ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxmoRo7KfUvI8YKBQ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwbkmUCxPIA6RSM2OJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwocjrzEwsqLjn51814AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwQ9iocCvn77xmtO3V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwp3htsGjG1Y9fKDph4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy_tRF9MjH7Kx8szIJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
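A raw response like the one above can be parsed and sanity-checked before the per-dimension values are stored. Below is a minimal sketch in Python; the allowed-value sets are inferred only from the values visible in this response, so the real codebook may include additional categories, and `parse_llm_response` is a hypothetical helper name.

```python
import json

# Allowed values per dimension, inferred from the responses shown above;
# the actual codebook may define more categories than appear here.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "unclear", "ban", "regulate"},
    "emotion": {"indifference", "resignation", "fear", "approval", "mixed"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject out-of-vocabulary values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
    return records

# Usage with a single (hypothetical) coded comment:
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"none","emotion":"approval"}]')
coded = parse_llm_response(raw)
print(coded[0]["emotion"])  # prints: approval
```

Validating each dimension against a fixed vocabulary catches the common failure mode where the model invents a label outside the codebook, so bad codings fail loudly instead of silently entering the dataset.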