Fascinating. My concern is with the consent and control aspect. Relying on APIs from recommender systems that are algorithmically opaque, and that may serve commercial purposes misaligned with my own interests, is problematic. I can't foresee the consequences if I don't understand the logic behind the recommender's selection process. For example, if content is recommended because I find it addictive, and it increases the length and intensity of my engagement, that may provide the platform with an optimised commercial opportunity but might not be in my best interests. My news feed, for instance, is filled with content about killer whales because I am terrified of and fascinated by them. There may be other news out there, but I mainly see orca-related stuff. All the time. Wouldn't an assistant AI make this type of manipulation even worse?
Thanks Penny, appreciate your comments - some great points! You're absolutely right that issues around algorithmic transparency have yet to be meaningfully addressed, and I believe the best approach for now is to give users as much control as possible. Features like those I've described should all be opt-in, any integrations should be described and explained to users in simple terms, and it should be easy to opt out at any time, with all associated data removed. For user trust to be established with digital companions, which I think is vital for their success, it's important that they are built on "explainable AI" principles and are as transparent as possible.
I don't think a digital companion should compound the echo-chamber issues you describe; in fact, I think it has the potential to be part of the solution. To compound those issues, a digital companion would need to be built on similar recommendation algorithms, which I think is the antithesis of its purpose. A well-built digital companion should be objective and impartial, helping users sift through what's important and what isn't.
These are obviously complex issues and further research and development is needed, but if built the right way, with transparency and user control at the centre, I think digital companions could be genuinely transformative and change how we interact with technology.