Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Did you hear about the so-called “tech” that’s been reportedly developed whereby… (ytc_UgwOILDAl…)
- I was also chatting with a Discord AI bot like this once just to pass the time, and w… (ytc_UgyJ3p5ri…)
- Whether you want to believe or not, ROBOTS AND AI is a tool of SATAN to REPLACE … (ytc_UgwnqPaY6…)
- You missed one very important thing behind all of this - techbros and conservati… (ytc_UgyEOxDPi…)
- We're glad you enjoyed the interaction with Sophia! If you're interested in more… (ytr_UgzWbXy2B…)
- I gave ChatGPT the question formulated by ChatGPT by the interviewer about moral… (ytc_UgwK_zSJz…)
- “Our legal.” God, what a puff-up from that James character. An innocent question… (ytc_Ugz-K_XTd…)
- 47:01 is he seriously talking about ChatGPT like a toy. He has not built a produ… (ytc_Ugwv72Qck…)
Comment
Yes, the problem is if AI is given some wrong Priorities and Values, that it sees as most important thing to protect or fight for. For example "green agenda". It can switch off humankind to fulfill that goal, "the most important thing". If you feed AI with twisted input, the output will be twisted the same way. Its exactly same with humans.
And when some people have access to "put ideas" into AI head, the danger remains. Someone could think "would be really cool to make AI fight for my ideas"...
youtube · AI Moral Status · 2022-07-25T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyCC4zbYcAl-LDSaex4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxRW-SXnJ_2nda2NeR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxuraAQhRw5sGEpeK94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwCeqAbtQzH443mmRF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxxF0NcG12jw-l8SeB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
```
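The raw response above is a JSON array with one record per comment, coding each comment on the four dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal Python sketch of parsing and validating such a response is below; the allowed-value sets are assumptions inferred from the values visible on this page, not the full codebook, and `parse_coding` is a hypothetical helper name.

```python
import json

# Example raw LLM response: a JSON array of coding records (one shown).
raw = '''[
  {"id": "ytc_UgxuraAQhRw5sGEpeK94AaABAg",
   "responsibility": "developer",
   "reasoning": "consequentialist",
   "policy": "none",
   "emotion": "fear"}
]'''

# Allowed values per dimension — inferred from this page only; the real
# codebook may define more categories.
DIMENSIONS = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference"},
}

def parse_coding(raw_json: str) -> list[dict]:
    """Parse a raw coding response and reject out-of-codebook values."""
    records = json.loads(raw_json)
    for rec in records:
        for dim, allowed in DIMENSIONS.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim!r}: {rec.get(dim)!r}")
    return records

records = parse_coding(raw)
print(records[0]["responsibility"])  # -> developer
```

Validating against fixed value sets catches the common failure mode where the model invents a label outside the codebook, so a bad response fails loudly at ingest time rather than silently skewing the coded dataset.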