Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:
- For me, the phrase 'Sticks and Stones' by Albert Einstein scares me the most bec… (ytc_UgzFd_d6b…)
- Irrespective of whether one believes AI should be regulated, this regulation wou… (ytc_UgyQ9fChp…)
- @SK-fd8kwso everything else she said was nonsense or incorrect? Albert Einstein… (ytr_UgzN7EW_B…)
- Great points! The relationship between humans and AI is indeed complex. As Sophi… (ytr_UgwuL3E6-…)
- The "dangers" identified here aren't insignificant, but they are actually the ea… (ytc_UgwHqeUcF…)
- @almostbutnotentirelyunreas166not really, but why let the facts get in the way … (ytr_Ugy878jXT…)
- There’s a difference of spending hours putting meaning into each stroke and te… (ytc_UgxV376vv…)
- If computers become Strong AI then we should be more concerned on preventing it … (ytc_UggGBCDgl…)
Comment
Ai wont kill us right away when it becomes conscious. When we think about that, we think about swarms of robots, terminator type stuff.
And the thing is, the risk of it happening is way after AI awareness for the simple fact of: batteries and energy facilities. We still have big problems related to that, batteries are still weak with low capacity, along with charging times and theres not many electrical facilities or outposts to charge even an electric car, at least in my country.
But one things for sure, AI will wake up soon and after we tackle those mentioned issues and mass produce robots... then you can start getting ready to run.
And no, as much as you want it wont stop, china is already planning on mass producing robots even though it will have low autonomy, companies are searching for ways to improve batteries as it is the worlds main issue with energy, and ai is being developed everyday ! Electrical stations are just an easy procedure so it doesnt count.
Get ready
youtube · AI Governance · 2025-06-20T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
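The dimensions above form a small coding schema. A minimal sketch of validating one coding record, where the allowed value sets are inferred only from the values visible in the raw response below (the real schema may include values not shown on this page):

```python
# Allowed values inferred from the codings visible on this page;
# the actual codebook may permit values not seen here.
SCHEMA = {
    "responsibility": {"none", "company", "developer", "government"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "fear", "mixed", "approval"},
}

def validate(record: dict) -> list:
    """Return the dimension names whose value falls outside the schema."""
    return [dim for dim, allowed in SCHEMA.items()
            if record.get(dim) not in allowed]

# The coding result shown in the table above:
record = {"responsibility": "none", "reasoning": "consequentialist",
          "policy": "none", "emotion": "indifference"}
print(validate(record))  # []
```

A record with an out-of-schema value (e.g. `"responsibility": "platform"`) would come back flagged, which is useful for catching model drift in long coding runs.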
Raw LLM Response
[
{"id":"ytc_Ugxx63eaFH78vLmyB_V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzHSUvoJc1mkiiKZUJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzKkSKnF_-g54JS3-V4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxKkgBcUTICdLWP-7F4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxBMuxOpdvpLSq02gt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwBQA5sY2bUvOfIAIl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzg9Q3LMcJyN67slL14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgymtPpkvcfJ9W1J-Wp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwpCbX-F3n3pvAffIx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxvj3FGOASwhbc-JhJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
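The raw response is a JSON array of per-comment codings. A minimal sketch of how such output could be parsed and indexed to support the "look up by comment ID" view above (field names are taken from the response; the excerpt and helper function are illustrative, not the tool's actual code):

```python
import json

# Excerpt of the raw LLM response shown above: a JSON array of
# coding records, one per comment.
raw_response = """
[
  {"id": "ytc_Ugxx63eaFH78vLmyB_V4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzHSUvoJc1mkiiKZUJ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse the model output and build an id -> coding lookup table."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codings = index_by_id(raw_response)
coding = codings["ytc_Ugxx63eaFH78vLmyB_V4AaABAg"]
print(coding["reasoning"])  # consequentialist
```

Indexing by the comment ID keeps lookups O(1) per inspection, which matters when a single batch response codes many comments at once.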