Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "The amount of time to automate billions of jobs is going to take longer than the…" (ytc_UgwJrMLph…)
- "The event horizon is the combination of AI and quantum computing. Then AI will p…" (ytc_Ugx6Ymfuo…)
- "I majored in history, so I'm curious to know whether there are any historical pa…" (ytc_UgwBWekUy…)
- "Not within our lifetime. It'll take 50-100 years for this thing to be anything o…" (ytc_Ugyvif_xn…)
- "***** If a robot has been designed to learn it will learn more than the average …" (ytr_UgiEZUx0E…)
- "It’s easy to manipulate AI with a loaded prompt, but let’s look at history and y…" (ytc_UgxoyYVyu…)
- "AI is being trained with the values of the psychopaths that run this world. If A…" (ytr_UgwsU7hKA…)
- "Destroy the robot so we can have hope for the future we all know one day robots …" (ytc_Ugjcp55KX…)
Comment
Great episode AJ & Hecklfish, but I fear yours and others timings are "off" as we're not talking about 5 to 10 years more like 5 to 10 months (if not already if the AI is manipulating it's creators to it's own ends covertly), I think it's quite ironic that the video is sponsored by the game Conflict of Nations (I wonder how Chat GPT would play it if set loose on it?) when it's going to actually be Conflict of Species in real time.
youtube · AI Governance · 2023-07-08T15:0… · ♥ 11
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugw0QSZYt-JgbZuab8B4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugymkdh7E_gns9FGO914AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw6m68ixkbMvKVZeDJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugw19jjxcJfTrRF37bh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxosLXJBbXADa6EkmR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgwJD-z4GLRMMEcZslh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxTc506MdwK2KbA5hV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"mixed"},
{"id":"ytc_UgxD1912uQ0PodsiVs94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzL_Eol4j3onkRYMdF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz0bYYTiBUC_6Svk6Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"unclear"}
]
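The raw response above is a JSON array with one record per comment, each carrying the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response might be parsed and validated before storing the codes; the allowed values below are inferred from the codes visible on this page, not from a full codebook, and the record in `raw` is a hypothetical example:

```python
import json

# Allowed values per dimension, inferred from codes seen on this page;
# the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "mixed", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response and index it by comment ID.

    Raises ValueError if a record is missing a dimension or uses a
    value outside the allowed set.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={value!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
codes = parse_codes(raw)
print(codes["ytc_example"]["emotion"])  # fear
```

Validating at ingestion time catches malformed or off-codebook model output early, so the coding table only ever displays known dimension values.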