Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
My last doctors appointment was a doctor using AI on his phone for everything. A…
ytc_UgwVZ3Dhb…
Unless we shut it down completely in time, we are moving forward towards the end…
ytc_UgxFW4cXN…
"AI will replace everyone in 12-18 months" - every AI company CEO 18 months ago …
ytc_UgyKGzrz9…
This is just the beginning of racism thru artificial intelligence and cybernetic…
ytc_UgwKcvJJO…
Yes....Yeeees....Do it more....Make those so caller "Artist" suffer.. Let the Ha…
ytc_Ugzd8B5ML…
Nope not trucking. You tell AI to go deliver a 70 ft truck at 911 Elm CT, Nashvi…
ytc_UgxIzIuqc…
This dude is so naive and can’t take a stance. On the one hand he preaches the d…
ytc_UgyXUC38w…
Funny how they introduce AI to supposedly improve performance and cut costs, yet…
ytc_UgxFm4keO…
Comment
However you're wrong about AI, it's not that simple to make the right choices. A choice requires a conscience behind that choice; it's not just a matter of computing power, it's an awareness of the impact each single decision made has on the real world. We're now in a phase of exponential growth, but soon AI will hit the limits of pure computation, discovering the limits of logic, simulating surreal scenarios that lead to no results. Chips will be needed that shift mathematics from pure to stochastic computation. And this in itself will already be a step towards an AI dependent on the conscience of real matter. Human conscience originates from the cosmic field, generated by the computational power of our entire universe. I strongly doubt that a unit of insignificant power like the one we could create on Earth could even slightly change the overall simulation. If you want to call it a simulation, since we're talking about living inside a simulation here, that in itself means very little.
youtube
AI Governance
2025-12-07T10:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugy0ETuPMRYV-zV_sz54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyNKBI_bFLRAFYkjGJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw0XsiAmyyE_s4fOyZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyUAcAsymKsZ469M9N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx5Oi9HPHA88WqMl_Z4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw_rauWQn5ie70aCJV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyhEWbfy5Yf4VGIpft4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwEHKotcYZxrhEkOwZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz83SS1Cl0rZwNJ6sp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwfEHIKsD5uOZvsGTl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```