Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> AI is not actually intelligent, meaning able to think up new things, it can only rearrange what humans have put into it. That has utility, in things like program coding and doing math calculations, but ask it how to make an antigravity craft and it won't be able to give an answer, it will just rehash currently existing theories that MIGHT somehow be usable. It will rehash Einstein relativity theory, not identify flaws in it and provide the true version. It's basically a calculator with more than just math programmed into it, it's also a literature, music and programming calculator. You can't get anything out of it other than what was put into it though, just various new arrangements of it.

youtube · AI Governance · 2025-12-30T08:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy1yNn8iPBCxhSb55B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx6gG2VjuESeLO1h2Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzoiV6Q17CEnL9u3fh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZmwzsw89mEWkt1fx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwsXQemX6V1kLZo96N4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzp_52F9X0m8_-_2714AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxI1bVOHtqSETdDH2B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxk28vRk-gCzlOh9sF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzrktIkGC6lWI3-ClN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzmMvSH8BAY9ECH5Tt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
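Looking up a coded comment from a raw response like the one above can be sketched as a small parser that indexes the JSON array by comment ID. This is a minimal illustration, not the tool's actual implementation: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response shown, while the helper name `index_by_id` and the validation logic are assumptions for the example.

```python
import json

# A truncated copy of the raw model output shown above (two rows kept for brevity).
raw = """
[
  {"id": "ytc_Ugy1yNn8iPBCxhSb55B4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzZmwzsw89mEWkt1fx4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
"""

# The four coding dimensions, as they appear in the response and the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_by_id(raw_response: str) -> dict:
    """Parse the model's JSON array and index each coded row by comment ID.

    Raises ValueError if a row is missing its ID or any coding dimension,
    so malformed model output fails loudly instead of silently dropping codes.
    """
    rows = json.loads(raw_response)
    index = {}
    for row in rows:
        missing = [d for d in DIMENSIONS if d not in row]
        if "id" not in row or missing:
            raise ValueError(f"malformed row: {row!r}")
        index[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return index


codes = index_by_id(raw)
print(codes["ytc_UgzZmwzsw89mEWkt1fx4AaABAg"]["policy"])  # → regulate
```

With the index built, inspecting any coded comment is a dictionary lookup on its ID, which mirrors the "look up by comment ID" workflow this page provides.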