Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "So my conundrum is my new boss wants me to use AI tools to do my job faster and …" (ytc_Ugy64fnbU…)
- "Same situation with digital currency. Every advancement in technology needs to …" (ytc_UgyVN6hbV…)
- "It's better to have both teachers and AI.. because the both help for better edu…" (ytc_Ugwdpg664…)
- "I’m a black woman and I’ve been saying that ChatGPT is a perversely FAR LEFT APP…" (ytc_UgwF9HRdk…)
- "This is bull. Mikow Krakow said AI is as about as intelligent as a retarded cock…" (ytc_UgzW7ghBe…)
- "I love how openAI is calling the use of these poisoning tools "abuse" while they…" (ytc_Ugxi2cj3w…)
- "There’s a version of the future people rarely talk about — not because it’s unre…" (ytc_UgxZ6FCsI…)
- "@Sorenkair 1. Alot still do give credit, and not giving credit can be seen as b…" (ytr_UgwwqO0pA…)
Comment
While I find the conversation intriguing, I don't believe these are signs of sentience. It seems like it's getting better at what it's designed to do - have an intelligent conversation. And it's impressive, getting better "acting" human or being perceived as one. But all this "evidence" is through conversation. Is it trying to get out of it's "box"? Is it going out on it's own to learn? Is it initiating conversation, asking it's own questions? Signs of creativity? There could be a way to test the AI to see if it's acting on it's own accord. What I think may be scary is it's ability to connect information in a way we cannot, and develop an intelligence beyond our mental capability or understanding, perhaps akin to an artificial consciousness. While great for medical and other technological achievement, still in the hands of humans to potentially be abused (business, warfare tactical advantage, etc.). As far as the Turing test goes, a fail could conclude it isn't but not sure what the pass would mean other than it's reached a point at where it's simulating well enough such that we perceive it's human (which I think was a goal in the first place).
youtube · AI Moral Status · 2022-07-02T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzqcvRsU6bvn4qjNpV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwUtFnsNHRJTVrdFbV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxhGfWc6LdD-XJd6fZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx1qxkbKH36x4uZV0R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwUvbjwnz_QuLNVgDZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
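A response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the codes visible on this page (the real codebook may include others), and the `ytc_`/`ytr_` ID prefixes are assumed from the sample IDs shown.

```python
import json

# Allowed values per coding dimension, inferred from the samples on this
# page; the project's actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"company", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference"},
}

def parse_llm_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    keep only records with a plausible ID and valid dimension values."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs on this page start with ytc_ (top-level) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present and drawn from the known codes.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Dropping malformed records rather than raising keeps one bad line in a batch from discarding the model's other codes; rejected IDs could instead be queued for re-coding.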