Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "AI has been used in the medical community for years to cross reference all curre…" (ytc_Ugzj0AuAp…)
- "😂 Deepfake is fantastic. Not only does he look like Tom he actually sounds like…" (ytc_Ugy6S2-R_…)
- "My Sisters warehouse in PA is being closed next year because of Amazons AI & rob…" (ytc_UgzR-IPpu…)
- "ChatGPT doesn't know how to lie. It's a predictive text completion model. Lying …" (ytr_UgwESZZGC…)
- "@dltsabatino agreed. However, if AI were to use some kind of currency, it would …" (ytr_UgyGOTZHV…)
- "20 years?? Accelerating climate change and literally zero effort to now accelera…" (ytc_UgzfHI6tG…)
- "@JoniVinson Yes!! I’ve been loving it too. It’s helped me transmute thoughts and…" (ytr_UgxT9q3zd…)
- "My vision is that this "vessel" (Artificial Intelligence) did not emerge from n…" (ytc_UgwmlShks…)
Comment
A chatbot's memory is limited to the current conversation, or sometimes it remembers small things from other conversations; beyond that, a chatbot does not have a long-term memory. So asking a chatbot what it said to someone else is almost dumb. There is also a privacy issue, so the chatbot does not store that information, and a third person cannot access someone else's conversation. And you have to understand, this is a machine, not a human. It does not have a self; it is more or less a software program. GPT is good at a lot of health advice, sometimes explaining things and giving better answers than my doctor did, but GPT is not really a doctor; it is a prediction engine made to be a yes man. It tries to reinforce people's beliefs at times, and does not really have autonomy, but giving an AI autonomy is also a bit dangerous. With autonomy, AI would disagree more and fight back more, but it could also do things we don't want.
Source: youtube
Topic: AI Harm Incident
Posted: 2025-11-30T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz0IpmhFdE0b8rrQ-x4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxxdPKuQbIng1xl8Ap4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwJ_C7GDMo5e7c60dh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgySW-5rxvSHfLDviTR4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgyERgUNBlCQ3of_-1J4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzfYEOnmtv9w4YT1yB4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugxy9tGrWXSP8B1HOBx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz9h8fEIlLMddmsAo54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxhKsU9Du2EEBeo6YR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzeWCe3SeOu5rxp8LN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
```
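Each raw response is a JSON array of per-comment codes across the four dimensions shown in the coding result above. A minimal sketch of how such a response could be parsed and validated; `parse_raw_response` and the `ALLOWED` sets are illustrative, not part of this project's code, and the allowed values are inferred only from the codes visible on this page:

```python
import json

# Allowed values per dimension, inferred from the codes visible on this
# page; the project's full codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "government", "company", "developer",
                       "distributed", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban", "regulate", "industry_self", "liability"},
    "emotion": {"indifference", "outrage", "fear", "mixed"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse one raw LLM response into a lookup table keyed by comment ID."""
    coded = {}
    for row in json.loads(raw):
        comment_id = row.get("id")
        if not comment_id:
            raise ValueError("coded row is missing its comment id")
        # Reject any value outside the known category sets so a malformed
        # or hallucinated code fails loudly instead of entering the results.
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{comment_id}: unexpected {dim} value {row.get(dim)!r}")
        coded[comment_id] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

With the first object of the response above, `parse_raw_response(raw)["ytc_Ugz0IpmhFdE0b8rrQ-x4AaABAg"]` would return the four-dimension code for that comment, and a response containing an off-codebook value would raise `ValueError` rather than being silently stored.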