Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or pick one of the random samples below.
- "First they lost the jobs in their home country to workers from India, now they l…" (ytc_Ugz-UUXvx…)
- "I am not a smart person, the logic and thinking of a machine is not understandab…" (ytc_UgxK_-81-…)
- "So will ai be buying all the goods and services, without the consumers all these…" (ytc_Ugw2O8xTB…)
- "this ai is ofccourse programmed to answer questions that is favorable to the int…" (ytc_UgyDcDWMR…)
- "That is why stopped doing digital art all together and I went back to painting o…" (ytc_Ugz0iAzU2…)
- "For AI to ever show any sign of sentience. It would need to convert binary into …" (ytc_UgyFUj6-L…)
- "I was promised the robot would work at McDonald's so I could do art. Wtf…" (ytc_UgxpLGilg…)
- "AI is trained off of already exsisting images made by humans. its not that smart…" (ytc_UgwZcLJIt…)
Comment
One question... Let's assume a hostile ASI springs forth tomorrow. How does it end humanity? What is the physical interconnect between the mind of the machine and the material word beyond? Seeing how we don't have the technological capacity, much less the implementations, to automate resource extraction and processing, product design, manufacturing and distribution... From where would the Terminator army arise, and how? "An ASI could destroy a city to erect a data center to expand it's processing capacity"... How? Like, physically, how? By what means? We're not at the point, in technological development or implementation, where it is physically possible for an ASI to end our species, and we won't be anywhere near there for decades, by which time if an ASI was going to be created, at this rate it would be. I think we'll get the Skynet monstrosity trying to take over what little internet-connected infrastructure there is in the world currently, and us reacting by simply permanently disconnecting all infected devices from electrical power. It can't currently stop us, does not have the physical capacity to prevent us, from simply unplugging it. I'm not saying it's not a serious threat to guard against, I'm just saying that, as the world is today, it's not possible for it to make us extinct. Not that I can see anyway. Drive us back to pre-internet technology? Certainly. Extinct us? Not currently possible without human assistence. And I'd also like to bring up that all the nukes on currently deployed, as in ready to be fired, could not do the job either even if the people manning their launch stations were deceived into firing l of them at once. We'd be set back badly, the damage would be extreme, but it wouldn't end us as a species.
Source: youtube · Topic: AI Governance · Posted: 2025-08-26T17:3… · ♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy1c5J6oNiuwoRPJut4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwautmRXRP5iAlMWit4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgynZL9GNfKigdT9I414AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyfYxohq9W38MmOADB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwZ0YqMbvvnWd3dP8h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
```
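The lookup-by-comment-ID step above can be sketched as follows. This is a minimal illustration, assuming the raw LLM response is always a JSON array of records with the schema shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the function name `index_by_comment_id` and the two inlined sample records are hypothetical, not part of the actual tool.

```python
import json

# Raw LLM response: a JSON array of per-comment coding records,
# using the same schema as the response shown above (sample data only).
raw_response = """
[
  {"id": "ytc_Ugy1c5J6oNiuwoRPJut4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgynZL9GNfKigdT9I414AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw coding response and index its records by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_comment_id(raw_response)
print(codings["ytc_UgynZL9GNfKigdT9I414AaABAg"]["emotion"])  # fear
```

Indexing once into a dict makes each subsequent ID lookup O(1), which matters when inspecting individual comments out of a large batch response.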