Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "The car would just stop because it would be able to control it's following dista…" (ytc_UghxbQR1F…)
- "this. this deserves all the upvotes in the thread. that and I want all ai to s…" (rdc_ktutsjg)
- "AI is just a tool. Just like a pencil and a paper. But the art is what comes fro…" (ytc_UgxScchqs…)
- "As someone who uses character ai which is my biggest confession yet, I hide this…" (ytc_UgyIStEtk…)
- "“1 human” “50 marshmallows” Ai: I would save the marshmallows because it’s a big…" (ytc_Ugxbo1V1C…)
- "I didn't know why such many ldl0t people that keep Hating AI Artist like that? I…" (ytc_UgzFBFHcE…)
- "Hi Vinod, we are sorry to say that you got the wrong answer but in any case, the…" (ytr_Ugwo1abBI…)
- "And for me, even if the AI somehow managed to be 100% accurate, I still think hu…" (rdc_n5gp95x)
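The "look up by comment ID" flow above can be sketched as a simple index over coded records. This is only an illustrative sketch, assuming records are held in memory as dicts; the record fields, the example IDs, and the idea that ID prefixes (`ytc_`, `ytr_`, `rdc_`) mark the source platform are all assumptions, not the dashboard's actual storage.

```python
# Hypothetical in-memory store of coded comments; in the real dashboard the
# records presumably live in a database, so this is only a sketch.
CODED_COMMENTS = {
    "rdc_ktutsjg": {"text": "this deserves all the upvotes in the thread.",
                    "platform": "reddit"},
    "ytc_example123": {"text": "AI is just a tool. Just like a pencil and a paper.",
                       "platform": "youtube"},
}

def lookup(comment_id: str) -> dict:
    """Return the coded record for a comment ID, or raise KeyError if unknown."""
    try:
        return CODED_COMMENTS[comment_id]
    except KeyError:
        raise KeyError(f"no coded comment with id {comment_id!r}") from None

print(lookup("rdc_ktutsjg")["platform"])  # reddit
```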
Comment
Absolute nonsense! 😂. There's no way on Earth that these predictions are going to come true in anything like the timeline proposed here. One AI 'brain' will not be building its own servers and designing its own chips in order to simply make itself obsolete. Nor will humans be sitting on their bottoms having a picnic every day and watching box sets while AI brain(s) operate bulldozers to extract rare earth minerals to manufactre 'new brain'. AI won't be building cars, making new tooling (and maintaining those tools with spares and oiling them). It won't be running TSMC and making super chips. Nor is it possible for a fleet of future warships, planes and tanks to be designed, built and tested super-fast! The energy infrastructure alone required to do all this would be off the chart.
The notion that humans would be so stupid as to build rockets and a massive, inconceivable space ship in order that malevolent AI code can explore the universe is frankly ludicrous. If any of that comes true even within the next 100 years it would be miraculous.
Add 100 years to the timeline I might buy 50% of this dream 😮
youtube · AI Governance · 2025-08-03T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
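The coding result above can be modeled as a small record type. A minimal sketch, assuming one record per coded comment; the allowed-value sets below contain only the labels that actually appear in this dump (e.g. responsibility: none/user/ai_itself) and the full codebook is likely larger.

```python
from dataclasses import dataclass

# Labels observed in this dump; the real codebook probably defines more.
RESPONSIBILITY = {"none", "user", "ai_itself"}
REASONING = {"consequentialist", "deontological", "unclear"}
EMOTION = {"outrage", "approval", "indifference", "resignation", "fear"}

@dataclass
class CodingResult:
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: str  # ISO 8601 timestamp, as in the table above

    def is_known(self) -> bool:
        """Check the labels against the value sets observed in this dump."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.emotion in EMOTION)

# The row from the table above.
row = CodingResult("none", "consequentialist", "none", "outrage",
                   "2026-04-27T06:24:53.388235")
print(row.is_known())  # True
```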
Raw LLM Response
```json
[{"id":"ytc_Ugw8bo98hbm2kDYhxR14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyM9WSB6qtyZa-QI0R4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgwoP7V8ctjiaGvPawF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugx2Gc7yuNbtqZV4d-p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugzea5k0n-h9ZAwcihZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgzVn5uSBg9vENreBSV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugyt_5Fdn3CPpePutMB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
 {"id":"ytc_UgyC35tQOQ_30F_m4IJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgzjrkFx9LKpx0sAWSB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
 {"id":"ytc_UgxWU1_DDHaF378hqGB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}]
```
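A raw response like the one above can be parsed back into per-comment codes with the standard library. A minimal sketch; the two records embedded below are copied verbatim from the dump, and indexing by `id` mirrors how the dashboard's lookup would need to work.

```python
import json
from collections import Counter

# Two records copied from the raw LLM response shown above.
raw = '''[
  {"id": "ytc_Ugw8bo98hbm2kDYhxR14AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyM9WSB6qtyZa-QI0R4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]'''

records = json.loads(raw)

# Index the codes by comment ID so a single comment can be looked up directly.
codes_by_id = {rec["id"]: rec for rec in records}

# Tally one coding dimension across the batch.
emotions = Counter(rec["emotion"] for rec in records)

print(codes_by_id["ytc_Ugw8bo98hbm2kDYhxR14AaABAg"]["reasoning"])  # deontological
print(emotions["approval"])  # 1
```

Parsing the whole batch at once and indexing by ID keeps the lookup O(1) per comment, which matters if the dashboard serves many inspect requests against one coding run.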