Clarifying data collection to end users #172
Oops, I think I put this on the wrong issue earlier, sorry! Also, the help page does a great job of describing the site's use of things like the analytics and error-reporting services. Teachers or parents may read this copy and expect it to be an exhaustive list of where the data they submit is stored. But it doesn't mention anything about sending data to IBM services, or how that data is stored, so it seems like it should be updated to include that as well. What do you think makes sense?
Yeah, I'd be happy with a pull request that improved the wording. The English version of the text is here: taxinomitis/public/languages/en.json (lines 1001 to 1010 in adaec31), and it is displayed here: taxinomitis/public/components/help/help.html (lines 362 to 381 in adaec31).
@dalelane awesome! I opened #173 as a first step, thanks for your help! 👍 I also thought it might be good to put a notice and a link to guidelines directly into the part of the app where folks are adding training data, what do you think? That way folks can realize where the data is going, even if they don't go dig into the fine print like I did. One approach might be to add something like this:

[mockup screenshot]

The idea is that this might help prevent any surprises, especially for young children who are new to ML and web services. Thanks for listening, and I'm happy to help out with this too, or with other ideas you have. I didn't include anything like this in that first PR, and wanted to see what you thought first. 👍 Thanks!
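To sketch what that could look like: the site's UI strings live in en.json (linked above), so one option might be a couple of new entries there for the training screen. This is only a hypothetical illustration, assuming invented key names that are not taken from the real file:

```json
{
    "TRAINING": {
        "DATA_NOTICE": "The examples you add here are sent over the internet to IBM Watson to train your machine learning model.",
        "DATA_NOTICE_LINK": "Find out more about where your data goes"
    }
}
```

However it is worded, the string could then be rendered near the data-entry controls, in the same way the existing help text in help.html pulls its copy from the language file.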
I've made an effort to keep the UI for kids as clean, simple, and uncluttered as possible. The majority of the site's users, as far as I know, are primary school age. I think this sort of legalese warning is unlikely to be useful or helpful to a 7-year-old. At worst, it'll confuse them. At best, they'll likely ignore it. My (admittedly untested) assumption is that a message like this won't really be an effective way to address the issue here.

The simplest approach would be to separate this out for teacher and student users. Give the teacher/parent users all the detailed info (links to more info, explain what is happening, explain the implications, etc.) and keep it out of the student/training UI. I'm very comfortable putting any information in the hands of teachers/parents and letting them decide what to do about it, and what is appropriate to tell their children/students.

The more nuanced approach would be to also add something to the training/student UI, but make it much more child-friendly. If anything is going in the training UI, it needs to be something that would make sense to a young child. That needs more thought about where it should go, the sort of language that should be used, how it should be explained, how it should be presented, etc.
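One way to picture the "simplest approach" described above is to gate the detailed data-collection copy on whether the signed-in user is a teacher or a student. A minimal sketch, assuming a hypothetical role type and helper, neither of which is from the taxinomitis codebase:

```typescript
type UserRole = 'teacher' | 'student';

// Hypothetical helper: pick which data-collection notice (if any) to show.
// Teachers/parents get the full detail; the student/training UI stays clean.
function dataNoticeFor(role: UserRole): string | null {
    if (role === 'teacher') {
        // Full detail: what is collected, where it goes, links to the
        // IBM docs, the implications for laws like GDPR, and so on.
        return 'Detailed data-collection information and links go here.';
    }
    // Students see nothing here (or, per the more nuanced approach,
    // a short child-friendly message instead of null).
    return null;
}
```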
@dalelane This is awesome, thanks for sharing all this! ❤️ 👍 Yeah, I think your question gets right to the heart of this: as we make awesome new ways for young children to make their own things with computing, and use more powerful tools like third-party services, how do we teach them about the risks along the way, and help them do this ethically and safely? I'm super excited about this project and others like it that are trying to tackle these hard questions, and doing it with young children, rather than limiting kids' access. 💻 😄

For this suggestion, I was thinking that the primary audience here is first the CS teachers, volunteers, or parents who would be introducing this to students, to help make sure they were aware of these issues so they can decide for themselves what to do. Showing something simpler to young people seems even better! Especially if it's understandable and not a wall of legal text.

To brainstorm, I remembered how Scratch cues young people not to use their real names in the sign-in flow. This seems like a good balance maybe:

[screenshot of the Scratch prompt]

From there I tried to make something subtle in the UI at the point where children are deciding what to enter as data, but that is also direct. This uses the example from Twitter earlier, where students were entering text messages to train a model that could tell who they were talking to in their family, so the buckets are "mom" and "sister":

[mockup screenshot]

Another iteration might check the text for common things in the browser before it travels over the network (eg, names, birthdates, other potentially personally identifiable information) and warn:

[mockup screenshot]

It's hard to find the balance between simple, clean, and low-friction learning, while also helping folks who are new to machine learning learn how to do it right in terms of privacy and ethics, especially with young children. I'm super happy to keep brainstorming with you on this too, or to pitch in with working on this over time, and to help with finding whatever you think the right balance is for your project. The simple UX is so great, and I'm excited to try out more ways of using this with kids. Thanks again for sharing your awesome work in the open! 👍
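As a rough sketch of that last idea, a browser-side pre-check could scan the training text for patterns that look like personally identifiable information before it is submitted. Everything below is hypothetical and only illustrative: the function name, the patterns (which are far from exhaustive), and the warning copy are invented, not part of taxinomitis.

```typescript
// Patterns that *might* indicate personally identifiable information.
// Illustrative only: a real check would need much more careful patterns.
const PII_PATTERNS: { label: string; pattern: RegExp }[] = [
    // dates such as 03/11/2012 or 2012-11-03, which could be birthdates
    { label: 'a date', pattern: /\b\d{1,4}[\/.-]\d{1,2}[\/.-]\d{1,4}\b/ },
    // email addresses
    { label: 'an email address', pattern: /\b[\w.+-]+@[\w-]+\.[\w.-]+\b/ },
    // phone-number-like runs of digits
    { label: 'a phone number', pattern: /\b\d{3}[\s.-]?\d{3}[\s.-]?\d{4}\b/ },
];

// Returns a child-friendly warning, or null if nothing suspicious was found.
function checkTrainingText(text: string): string | null {
    for (const { label, pattern } of PII_PATTERNS) {
        if (pattern.test(text)) {
            return `This looks like it includes ${label}. The text you add ` +
                   `here is sent over the internet to train your model, so ` +
                   `please don't include private information about real people.`;
        }
    }
    return null;
}

// Example: run the check before a training example leaves the browser.
const warning = checkTrainingText('my sister was born on 03/11/2012');
if (warning) {
    // In a real UI this would be an inline notice, not a console message.
    console.warn(warning);
}
```

A check like this can only ever catch the obvious cases (real names in particular are hard to detect mechanically), so it would complement, not replace, the child-friendly notice discussed above.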
Contributes to: #172 Signed-off-by: Dale Lane <dale.lane@uk.ibm.com>
This came from an awesomely helpful Twitter thread: https://twitter.com/dalelane/status/1106565327457054720, thanks @dalelane! 👍
I read https://cloud.ibm.com/docs/services/assistant?topic=assistant-information-security#information-security as suggesting that products using IBM services bear the responsibility for ethical use, and the legal liability for ensuring compliance with local laws and regulations (eg, GDPR).
I also see pretty explicit guidance on personally identifiable information that doesn't seem like it's presented to end users:

[quoted excerpt from the IBM docs]
To me, helping young people understand these responsibilities seems super important for teaching them how to build AI systems ethically. Would you be open to a pull request that tried to do this?