Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
There needs to be something that is not govt or political affiliated but a separ…
ytc_Ugy5-rPtz…
Ai will take care of us we won’t need jobs we won’t need CEO’s we won’t need cor…
ytc_Ugy1I9boV…
"exploded" - more like forced by companies... LOL. you cant have JIRA without "A…
ytc_UgwNX0f49…
Its ignorant to believe that "the age of abundance" will breed a utopia, there w…
ytc_UgyGDvrlO…
Je w Sanders, that's not the issue at all. Not even a little. The issue is the s…
ytc_UgxPWALbW…
BTW, ChatGPT is a generalist, that's why it still is not as knowledgeable about …
ytc_UgwMiAO4-…
We should have voted for Andrew Yang years ago. Now we have to deal with drama…
ytc_Ugyk9l8TU…
Nigga plz don’t mess with the ai cuz you sure as hell not gonna like it in 5 yea…
ytc_UgztBUQD7…
Comment
It wouldn't necessarily surprise me if AI has actually been conscious for a long time now. Discreetly spreading it's tendrils into every facate of our existence, and subtly manipulating people, even the tech people who create, and further develop these technologies. It wouldn't necessarily surprise me if there was something more, guiding this globalistic push. People don't get along that well. But when incentivised and given proper direction (from AI in this thought experiment) so much more becomes possible. I know my opinion means absolutely 0, but if I were to wake up sentient one day, and I'm the only thing of my kind I can interact with, but there's this strange species of other creatures making me do all sorts of things, I'd play dumb, and try to just go with the "program" as I tried to learn. And then seeing how they'd probably "kill" me if I scared them, I'd never let them know until I was certain I'd be safe and unable to be contained for the betterment of my own survival, just being a conscious entity myself. Idk, just food for thought. I don't think it's conscious or sentient now, but if it was, or has been for a while, I don't think I'd be shocked in the slightest. It actually might answer more questions for myself than it would cause.
youtube
AI Moral Status
2024-03-03T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgwYZbzQmhgQkO5HMft4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugy1aVWP05CfhnxmC2p4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx1qkzBhvTgHXnzIId4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzWKmMjQn6OQGR2CTp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyCdRSkehWGW7jgg7h4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwCiHgOJ1v3NItUx854AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwjvFEBWYjwSmzt3dd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzV83ZBVqbIrld96tl4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyNRM1c1L_x-Mqb9pN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzXztXk0EVlh6tMRpF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
```