Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by comment ID.
Random samples

- "Okaaay I'm grateful that i always treat chatGpt great and i always flirting with…" — ytc_UgwQ3EPhd…
- "I think “minors” means simple here, or extra characters, not actual minors but i…" — ytc_UgyTztSH5…
- "Imagine the AI developers, giving certain boundaries and goals for AI such as do…" — ytc_UgxmVgqmf…
- "I really agree with this conversation that AI and Elon take over controlling the…" — ytc_Ugx0gZV9o…
- "Thank you very much Senator Sanders for caring about the exploited people like m…" — ytc_Ugw1uSO-6…
- "I feel pretty much the same way I like to download and collect art from places l…" — ytc_Ugyy9sbs6…
- "That’s not open ai chat got, it’s a different company, one with no track record…" — ytc_Ugw9n2XjW…
- "I just love how a lot of AI's eventually become either sexist or racist, if not …" — ytc_UgyccmWod…
Comment

> AI's did not come to be by having to be aggressive. Also, they can be turned off - i.e. they can survive without food. They won't have any motivation to compete etc. They have no reason to be sentient. Humans need to be sentient or they would have been eaten by another sentient creature. AI's did not evolve from reptiles. There are a lot of reasons to believe AI's won't ever pose a threat to humans.

youtube · AI Governance · 2023-05-10T07:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyzC7hUwoTwI5wcMg94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzMmPauW37PzKLfs_Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyc6IjXgHZoV9aMA3B4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyzahoGjH52MVDOcfl4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyLz_SYDbl15lKzAGd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugz5cGMHLoKEApqHnhl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxeMOgkZzlIJemoz7Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx3CQtwnRmwdyvVVFZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugywi9ivsNsjeRpGG7p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwqMz2eZgztZYkE2Jh4AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"resignation"}
]
```
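The by-ID lookup described at the top of the page can be sketched in a few lines of Python. This is a minimal illustration, not the tool's actual implementation; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the raw response above, and the two sample rows are copied from it.

```python
import json

# A raw LLM response in the shape shown above: a JSON array of
# per-comment codings, each keyed by the YouTube comment ID.
raw_response = """
[
  {"id": "ytc_UgyzC7hUwoTwI5wcMg94AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzMmPauW37PzKLfs_Z4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for one comment ID.
coding = codings["ytc_UgyzC7hUwoTwI5wcMg94AaABAg"]
print(coding["emotion"])  # approval
```

Indexing into a dict once, rather than scanning the array per query, keeps repeated lookups cheap when a batch response codes many comments.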