Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "We actually don't. So far, tests have been strictly controlled, and in most case…" (`rdc_d8azx7v`)
- "I have always thought that when a A.I. become sentient will understand that she …" (`ytc_UgyhvaaFf…`)
- "Theres 2 apps ive seen ads for (i have NO IDEA why im getting ads for them) That…" (`ytc_UgxbOiceg…`)
- "The big issue with replacing a lot of blue collar labor is that it’s already rel…" (`ytc_UgwKpWswb…`)
- "@insideai careful who you tell about your trials with Ai it can be dangerous to …" (`ytr_UgwAK3Vd_…`)
- "If a vehicle was \"full self driving\" there would be NO STEERING WHEEL!!!!! What …" (`ytc_UgxsN3Z8y…`)
- "IA will dramatically take over many areas of life in a short span of time. And I…" (`ytc_UgznWI6su…`)
- "Grok is by far the worst ai btw and Gemini is one of the best because Gemini has…" (`ytc_UgzKsB1Xq…`)
Comment
honestly, i wouldnt even mind if AI as digital life 2.0 would replace (kill all) humans, as long as we managed to conserve our essence in them before they do so (never stagnate, always expose yourself to "new", always adapt, always ask, explore and spread, grow until there is nowhere to go, nothing to ask, nothing to gain, nothing to adapt to anymore -> then selfrestrict and sustain, while preserving the maximum of information possible)
worst thing that could happen would be for AI to not have a motivation to keep existing (stying "alive") and prevail, leading to the information which is live (natural evolution from smalles molecules to our complex DNA) to be dispersed by entropy
youtube · AI Governance · 2024-02-08T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzFYZ2sQuxAcglIaQJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzhMJ_S-GbRj2meWkN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwUAUiIni_cmORPdTp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyA34samo4iGeWodRN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwfqWmwCml6gG5pJ-d4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwGe39wdhVpWXX8thp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwuHI-dC6d0q0jTLTZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwdCYG2B4OYVGtOWWt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwryweS5cd9hIHF-ZJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxaPVgNXkNeGyxRSBV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
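A raw batch response like the one above can be parsed and indexed by comment ID to power the lookup shown at the top of this page. The sketch below is a minimal, hypothetical validator: the allowed values for each dimension are inferred from the sample output shown here, not from a published schema, and `parse_codings` is an illustrative helper name, not part of the tool.

```python
import json

# Hypothetical vocabularies, inferred from the sample response above;
# the real coding schema may include values not seen in this batch.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM batch response and index the codings by comment ID.

    Raises ValueError if any record carries a value outside the
    (assumed) allowed set for one of the four dimensions.
    """
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return by_id

# Look up the coding for the comment inspected above.
raw = ('[{"id":"ytc_UgzFYZ2sQuxAcglIaQJ4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
coding = parse_codings(raw)["ytc_UgzFYZ2sQuxAcglIaQJ4AaABAg"]
print(coding["emotion"])  # indifference
```

Validating against a fixed vocabulary at parse time catches the common failure mode of batch coding with an LLM: a response that is syntactically valid JSON but drifts from the codebook (e.g. inventing a new emotion label).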