Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I am sure the technology is there to get the job done, I am not sure, but I woul…
ytc_UgzdKmZKp…
Yeah this isnt how gpt works, ai isnt evil nor does it care about the user, this…
ytc_UgzoNDVKG…
Get all the students to wear black latex gloves and make fingerguns at each othe…
ytc_Ugz5L-O9e…
what if machine learning patterns prior to ai had already done this to everyone …
ytc_UgyN4aIOK…
This guy is the classic scholar that has little to no clue how the real world wo…
ytc_UgyBePj6p…
These huge AI centers have caused severe stress because of their constant loud n…
ytc_Ugwr8GJZJ…
What do you think ChatGPT is trained on, entirely proprietary data? LLMs by thei…
rdc_m9fuxob
is the audience all AI...she makes many jokes and there is not a single laugh...…
ytc_UgwbJCCuC…
Comment
The problem with defining consciousness is that no AI will ever match the definition, because, as we have for decades, we'll come up with a definition, a computer will do it, and we'll then declare it no longer a good definition.
I believe that the AIs we have currently are sentient in the same way, not because AIs are special, but because humans aren't. People say AI isn't sentient because it's just looking through its data or its programming and then trying to figure out what comes next to a given stimulus, but that is fundamentally no different from how we interpret stimuli, comparing them to our past experiences and evolutionary instincts. We just act as though that isn't the case because that's the only way we can keep living.
youtube
AI Moral Status
2023-08-20T21:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
  {"id": "ytc_UgzWAHgWfoZ5ol9mCRt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugwc1ty0ImoEXOg2exZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyS9Nre55y6-UPZxuR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwqJXJoPoVSa4H41Nx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugz7czrihStF5-yiWZp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwSX9COc9jQzvb1NGN4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwTwio87yM07XSZvpd4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugzzm_J-oUBpR1kFXIx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxBs7ixkeKu40Iqt8V4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzJ-zyYtWtNAXfd91Z4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "unclear", "emotion": "indifference"}
]
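A raw response like the one above can be parsed and indexed by comment ID to recover the coding for any single comment. A minimal sketch in Python, with the batch truncated to two of the ten entries (the IDs and values are taken from the response shown above; the variable names are illustrative, not from the actual pipeline):

```python
import json

# Raw LLM response: a JSON array of per-comment codings,
# truncated here to two entries for brevity.
raw_response = '''[
  {"id": "ytc_UgzWAHgWfoZ5ol9mCRt4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxBs7ixkeKu40Iqt8V4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]'''

# Index the codings by comment ID so one comment's coding
# can be looked up directly, as the "Look up by comment ID"
# control on this page does.
codings = {entry["id"]: entry for entry in json.loads(raw_response)}

coding = codings["ytc_UgxBs7ixkeKu40Iqt8V4AaABAg"]
print(coding["reasoning"])  # deontological
print(coding["emotion"])    # indifference
```

Keying on `id` assumes the model echoes each input comment's ID back unchanged; a production pipeline would also want to check for IDs that are missing from, or duplicated in, the response.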