Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
There are also some major ethical problems with an AI romantic partner. Can the company just infinitely raise prices and force the user to pay or give up a serious emotional attachment? Can the user transfer the AI to another service? Can the company code the AI in such a way that it makes the user more likely to become emotionally attached, e.g. the way tobacco companies and casinos engineered their products to make consumers more addicted? What if this happens implicitly instead of explicitly: what if the AI learns to teach the user to sabotage their real-life relationships so that the user becomes even more reliant on the AI? Something even more malicious: once a user is hooked, can the company use the emotional attachment to the AI to persuade or coerce the user into doing something like voting differently?
Source: reddit (AI Governance) · posted 2024-11-27 (timestamp 1732740623) · ♥ 52
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_lzavsps","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"rdc_lzb3x3y","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"rdc_lzc4rhj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"rdc_lzazkzj","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"rdc_lzaudj1","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"} ]