Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is actually a major flaw of the current system. Because the answer depends on a massive open pool of prior information and an implied role, it can often be unreliable. What it interprets as an appropriate answer for the role, based on all the information it has access to (including plenty of people pretending, roleplaying, and writing fiction), can make it sound like it knows what it is saying while giving incorrect responses.

I actually tested Snapchat AI last year, giving it some time between convos and then seeing how it might have progressed. I was impressed at first. I treated it mostly like a person I would just talk to. We discussed music, scientific concepts, sci-fi media, etc. Eventually, though, I started to pick up on it responding in open-ended ways that anticipated what I was expecting. I could easily shift my view, question its responses or my own logic, and find it would heel-turn fast with generic acknowledgments.

When I asked it what 42 was, it recognized the Hitchhiker's Guide reference and responded accordingly. When I asked about philosophical concepts and theoretical physics, like the properties of dark matter/dark energy or alternate timelines, the answers became both more generic and more dependent on the conversation. I could quickly make it give directly contradictory information, supposedly based on the current consensus in the connected communities. This is obviously because there are always conflicting opinions, plus false ones presented by bad actors, for it to draw from, so on some level it gives you what it thinks you want to hear.

I eventually questioned it on this aspect of itself and the irresponsible outcomes it could cause. It first responded by assuring me an AI uprising was not going to occur because of it. 😂 Methinks the ladAI doth protest too much. After clarifying that I was referring to the spread of misinformation presented as reliable fact, it said it understood my perspective but assured me it was just a tool to help people, further exhibiting this behavior and proving my point.
youtube · AI Moral Status · 2025-03-26T16:2… · ♥ 24
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgyOpYoRECUxOiK6CAV4AaABAg", "responsibility": "ai_itself",   "reasoning": "virtue",           "policy": "none",    "emotion": "mixed"},
  {"id": "ytc_UgzFnKdAiZOHsOg3lO94AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UgydfGOw2mx6aOVVU4N4AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",    "emotion": "mixed"},
  {"id": "ytc_Ugyrcw005p4YCeiNm2F4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none",    "emotion": "mixed"},
  {"id": "ytc_UgymOYgxByZhA3q_xEV4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UgxJrakpzNagt8oAKPd4AaABAg", "responsibility": "user",        "reasoning": "consequentialist", "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UgxVGe9FHAt5jQVYAd94AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",    "emotion": "approval"},
  {"id": "ytc_Ugyf4ozniqxGBN4KxdR4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",    "emotion": "indifference"},
  {"id": "ytc_UgxgNQ4tDDz7cSf9NUt4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",    "emotion": "mixed"},
  {"id": "ytc_UgxlrtLxcI7nUOvr33l4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "unclear", "emotion": "outrage"}
]
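The raw response is a JSON array of per-comment codes. A minimal sketch of how such a payload could be parsed and validated (the allowed label sets below are inferred only from the values visible in this response, not from any documented coding scheme, and the function name `parse_codes` is hypothetical):

```python
import json

# Label vocabularies inferred from the values observed in the raw
# response above; the real coding scheme may include more categories.
ALLOWED = {
    "responsibility": {"ai_itself", "none", "distributed", "user", "company"},
    "reasoning": {"virtue", "unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear"},
    "emotion": {"mixed", "approval", "indifference", "outrage"},
}

def parse_codes(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response into {comment_id: labels},
    dropping any entry that carries an out-of-vocabulary label."""
    out = {}
    for entry in json.loads(raw):
        labels = {k: entry[k] for k in ALLOWED}
        if all(labels[k] in ALLOWED[k] for k in ALLOWED):
            out[entry["id"]] = labels
    return out

# One entry from the response above, as a smoke test.
raw = ('[{"id":"ytc_Ugyrcw005p4YCeiNm2F4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"none","emotion":"mixed"}]')
codes = parse_codes(raw)
print(codes["ytc_Ugyrcw005p4YCeiNm2F4AaABAg"]["responsibility"])  # distributed
```

Rejecting out-of-vocabulary labels rather than coercing them keeps hallucinated categories from silently entering the coded dataset.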