Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Comment
Conversation with chatbot:
-Can AI kill, destroy, annihilate all humans?
- Yes, because humans have the tendency to kill, destroy, annihilate each other.
Hopefully on the programming of AI the programmers/ creators passed on also our ability to love, respect, hope, live in peace etc...
Source: youtube · AI Governance · 2023-09-10T08:0… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxtMepRKz5g7clodph4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgykLMqygnG7Ez97Vy14AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugze7NKfLvdlf9zCvvh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxDsjeNloBTpJ0MkLR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwlt-GrZshxb0b28J94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxbnqbNR-zSliNfYI54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwpAdpHHyT9l07lgth4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugzx3slmdJ1kEAGKYSx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzSaFcANeQBSoe_DvF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyvs6mrHSETYVrv9xV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
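A raw batch like the one above can be parsed and indexed by comment ID so that a coded record can be inspected directly. The sketch below is a minimal, hypothetical consumer of such a response: the allowed value sets are inferred only from the records shown here (the full codebook may include more options), and `parse_batch` is an illustrative name, not part of any real pipeline.

```python
import json

# Dimension value sets inferred from the records displayed above; an
# assumption, since the full codebook is not shown on this page.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM response and index the coded records by comment ID.

    Raises ValueError if a record carries a value outside the (assumed)
    codebook, which is how malformed model output would surface.
    """
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id
```

With the ten-record response above, `parse_batch` would return a ten-entry dict, and looking up a single ID yields its four coded dimensions.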