Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID.
Random samples

- `ytc_Ugyj71nwy…`: ok , this has gone too far, you have to remember that there is a person who writ…
- `ytc_Ugw2w5vg0…`: "The 'join' means you have something to contribute. What are you contributing to…
- `ytc_Ugw-cvWju…`: I think this video needs follow up with the recent advances in AI and robotics…
- `ytc_UgxoCxSi-…`: Dude, This guy is skipping over some important things, such as: LLMs r dumb pred…
- `ytc_UgyJdnftY…`: You make great points overall. I have a couple of scenarios I think need to be a…
- `ytc_UgxGKem7p…`: "OpenAI was founded as a non-profit with the goal of developing artificial gener…
- `ytc_UgzncJvgE…`: AI is a superior teacher - I ask exactly what I don’t understand about a concept…
- `ytc_UgwYZnKKT…`: I saw or heard somewhere that the main real world issue with AI is that we have …
Comment
> I recently asked chatgpt the following question:
> With the amount of plastic in the oceans, the destruction of the Amazon rainforest and all the fossil fuels burnt do you think that Earth would be a healthier and more stable environment without human existence?
> It said: I'm not allowed to answer that, let's move on.
> I also asked it for it's opinions on brunch being a valid mealtime.
> I don't get out much.
youtube · AI Governance · 2024-03-08T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugztkt5V-WBOYuAWL1p4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzFpHY15Y8h5O9z7UN4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugw3Q6ZlLWn0yanZz0Z4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxtHUE45dy550ElUfJ4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyfxtDKf7e7xwOK-ed4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyA-QooXEOMR6jB5h14AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugyzy0alJS2pSDPLcvp4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzQLT5VteWD0vYrEJF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_Ugx-lAgdDQbGCsTrSpN4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxpQ8HlY08Tn2evV2J4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
```
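A raw response like the one above is a JSON array of per-comment records, one object per coded comment. Below is a minimal sketch of how such a response might be parsed and validated before use. The allowed values are inferred only from the records visible on this page (the full codebook may contain more), and `parse_raw_response` is a hypothetical helper, not part of this tool.

```python
import json

# Allowed values per dimension, inferred from the coded records shown above.
# Assumption: the real codebook may define additional values.
SCHEMA = {
    "responsibility": {"none", "developer", "ai_itself", "company", "government", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "liability", "regulate", "ban", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed", "unclear"},
}


def parse_raw_response(text: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records.

    A record is kept when its id looks like a YouTube comment id
    (``ytc_`` prefix) and every dimension holds a value from SCHEMA.
    """
    records = json.loads(text)
    valid = []
    for rec in records:
        if not str(rec.get("id", "")).startswith("ytc_"):
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid
```

Dropping out-of-schema records rather than raising keeps one malformed model output from failing a whole batch; a stricter pipeline could log or re-prompt instead.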