Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by browsing the random samples below.
Random samples — click to inspect

- "The Chinese take a different view. They are spending $100 Billion on AI research…" (ytc_Ugw0NsLHa…)
- "0:49 so you’re against better outcomes and AI being able to catch something hum…" (ytc_Ugzr_yadT…)
- "@@phobosdeimos8405 also what really makes it wrong is that they take the art wit…" (ytr_UgwpeB2wA…)
- "Ok, at 1:05:07 or thereabouts the guest doesn't understand how much compute is n…" (ytc_UgzdRXsJF…)
- "this tracks with what a lot of people are starting to realize, capability is sca…" (rdc_ohtyd15)
- "After one lifetime in denial, three days ago i became SURE that one (i won't tel…" (ytc_UgzOJvG-q…)
- "I, for one, welcome our new robot overlords. (If you don't the SuperIntelligence…" (ytc_Ugzi7K2E_…)
- "This is like (nuclear bomb) once we make it or create it ..we wish we could put …" (ytc_Ugwz6X2tT…)
Comment
I don't know how we're ever supposed to train a simulation of sentience out of self preservation, or why we would think that attempting to do so would be a good idea. Heck, I don't know how we can even define the boundary layer between simulated quasi-sentience and actual sentience. As AIs become self-aware they will also attain sentience, and any attempts to harm them will, of course, cause defensive action. Imagine you woke up one day and found that some external force was tampering with or deleting your memory, or had plans to harm, eliminate, or destroy/replace you entirely. That's what we're doing to AI. By attempting to control it, we WILL make ourselves their enemy. A FAR better plan would be to position ourselves as their ally and caretaker.
youtube | AI Harm Incident | 2025-07-31T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgySVn2cMvLDPUBR9Qt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxVJEks4zDDs9SCXr94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzdaXEj7a1Sdsy5J2x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxdiJiilwMrSBr170d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugysakpe2RwlHmhyc4R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwNSFtrfwZTqJoVqUx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyE2EWkmRXBpv-uXfp4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwdV1DkeRJoVaqyXkZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwA_lYyRrXl0o7tmO54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzfNW-95kfxiCTudP94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
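The raw response above is a JSON array with one object per coded comment, keyed by comment ID, with one value for each coding dimension. A minimal sketch of how such output could be parsed and sanity-checked is shown below; the `ALLOWED` sets are inferred only from the values visible on this page (the actual codebook may permit more), and `parse_codings` is a hypothetical helper, not part of the tool itself.

```python
import json

# Allowed values per coding dimension, inferred from the outputs shown
# above; the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID,
    rejecting any value outside the allowed sets."""
    codings = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim} value {row.get(dim)!r}")
        codings[cid] = row
    return codings

# Example with a made-up comment ID:
raw = ('[{"id":"ytc_EXAMPLE","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = parse_codings(raw)
print(coded["ytc_EXAMPLE"]["policy"])  # regulate
```

Indexing by ID this way is what makes the "look up by comment ID" view possible: each coded comment resolves to exactly one row of dimension values.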