Instagram Introduces Filters To Clean Up Your Stream

For any social network that allows comments and replies to posts, which is most of them, the comments section can be a real mess. Everything from profanity and crude humor to outright cyberbullying and hate speech can show up under an innocent post, and Instagram is no exception. Starting this week, the company is doing something about it.

Yesterday Instagram announced that it is rolling out new tools to help users filter the kinds of comments they see, in the hope of improving the experience for the average, non-abusive user. These settings have been available to high-profile accounts since the summer.

In the words of Instagram CEO Kevin Systrom:

“To empower each individual, we need to promote a culture where everyone feels safe to be themselves without criticism or harassment. It’s not only my personal wish to do this, I believe it’s also our responsibility as a company. So, today, we’re taking the next step to ensure Instagram remains a positive place to express yourself.”

To achieve this, Instagram is introducing a new set of filters that let users control what they see in their comments, or more specifically, which types of public comments other users can direct at them.

[Screenshot: Instagram’s new comment filter settings]

In an example we see far too often, a young user posts a selfie and, in the worst case, receives replies like, “You’re ugly and you should kill yourself.”

One option is to have Instagram block all comments containing words and phrases that are frequently reported as inappropriate. We assume that “ugly,” along with other terms used in personal attacks, is on that list. A second option lets users filter out a custom list of keywords that they supply themselves.
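If you’re curious what that looks like in practice, here is a minimal sketch of a keyword-based comment filter, written in Python. To be clear, this is purely illustrative: the default word list, the matching logic and the function name are our own assumptions, not Instagram’s actual implementation.

```python
# Illustrative sketch of a keyword-based comment filter.
# NOT Instagram's code: the default list and the simple substring matching
# below are assumptions made for this example.

DEFAULT_BLOCKED = {"ugly", "kill yourself"}  # stand-ins for Instagram's undisclosed default list

def is_hidden(comment, custom_keywords=()):
    """Return True if the comment contains a default or user-supplied keyword."""
    blocked = DEFAULT_BLOCKED | set(custom_keywords)
    text = comment.lower()
    return any(keyword in text for keyword in blocked)

# The default list alone catches the worst-case reply from the example above:
print(is_hidden("You're ugly and you should kill yourself"))   # True
# A custom keyword list extends the filter for an individual account:
print(is_hidden("Nobody likes your posts", {"nobody likes"}))  # True
```

The real system is presumably far more sophisticated (misspellings, emoji, context), but the two options boil down to exactly this: one list Instagram maintains, plus one list you maintain.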

The new filters may already be available to you (they are for me) under Settings -> Comments on your mobile device or computer. FYI, if you’re going to build a large keyword list, it is probably easier to do it on a computer.

We think this is a very positive step.

If you are worried that your teen or tween is at risk or acting inappropriately online, we can help. The ThirdParent initial audit is now FREE (previously a $49 value). Ongoing monitoring is $15 per month and you can cancel at any time. Click here to sign up today!

Contact ThirdParent any time for help and resources for monitoring child and teen internet activity.

Work at a high school or college? We have custom solutions for monitoring dangerous or inappropriate activity. Learn more.

Follow us on Twitter or Facebook for more news and information on keeping your teens safe online. You can also sign up for our weekly newsletter below.

Twitter Is Not Likely To Eliminate Abuse

I’m a big fan of Twitter. Yes, I use it for work and for our brand here at ThirdParent, but I also use it personally – a lot. It really is the best way to stay up to date on current events as they happen, and to hear real-time thoughts from leaders in literally every field.

The main problem with Twitter is abuse and abusive users.

We’ve written about abuse and Twitter before – here and here and here. Twitter has been talking about abuse for a while, and they sound like they have good intentions, but each tweak they implement on the platform seems to come up short.

Last night, Twitter reported earnings, and on the call CEO Jack Dorsey made comments that led us to believe they will never be able to make the platform a safe environment for some users. Dorsey’s comments in full:

“This is Jack. This is really, really important to me and to everyone at the company. So, I want to address both freedom of expression and safety together here, since the two intertwine.

We are not and never will be a platform that shows people only part of what’s happening or part of what’s being said. We are the place for news and social commentary. And at its best, the nature of our platform empowers people to reach across divides, and to build connections, to share ideas and to challenge accepted norms.

As part of that, we hope – and we also recognize it’s a high hope – to elevate civil discourse. And I emphasize civil discourse there. Abuse is not part of civil discourse. It shuts down conversation. It prevents us from understanding each other. Freedom of expression means little if we allow voices to be silenced because of fear of harassment if they speak up. No one deserves to be the target of abuse online, and it has no place on Twitter.

We haven’t been good enough at ensuring that’s the case, and we must do better. That means building new technology solutions, making sure our policies and enforcement are consistent, and educating people about both. We’ve made improvements in the first half of the year, and we’re going to make more. We named safety as one of our top five priorities for this year, and recent events have only confirmed that this is truly one of the most important things for us to improve, and has motivated us to improve even faster.”

Why are we skeptical that they can stamp out abuse? There are indications that they don’t want to. Consider this sentence:

“We are not and never will be a platform that shows people only part of what’s happening or part of what’s being said.”

In any discourse, harsh disagreements, criticism and arguments are at times part of what is being said. Twitter wants to preserve that real discourse on its platform. To get rid of abuse entirely, they would have to manually review every reported interaction and decide where the fine line lies between civil and uncivil disagreement. That’s pretty much impossible if they intend to let users speak their minds and err on the side of treating users as innocent until proven guilty.

Instead, it appears that they want users to self-police and “elevate civil discourse.” That is a nice goal, but it won’t happen. There will always be some users who are genuinely mean, or who get a kick out of trolling others. Twitter won’t be able to get this right without taking more extreme steps, unfortunately.

If you are worried that your teen or tween is at risk, we can help. The ThirdParent initial audit is now FREE (previously a $49 value). Ongoing monitoring is $15 per month and you can cancel at any time. Click here to sign up today!

Contact ThirdParent any time for help and resources for monitoring child and teen internet activity.

Work at a high school or college? We have custom solutions for monitoring dangerous or inappropriate activity. Learn more.

Follow us on Twitter or Facebook for more news and information on keeping your teens safe online. You can also sign up for our weekly newsletter below.

Twitter 10k Debate Misses the Boat

Twitter has been under fire on a number of fronts of late. Wall Street is unhappy because user growth has been disappointing. Non-users find the platform confusing or too much work. Some (many?) users are unhappy because Twitter seems to be unable or unwilling to tackle abuse (we wrote last week that Twitter may be getting more serious about it).

Yesterday a whole new ruckus erupted when Re/code broke the story that Twitter is considering raising the maximum tweet length from 140 characters to 10,000. On the face of it, that’s a huge change, and hardcore users appear to be very wary of the potential transformation.


Twitter CEO Jack Dorsey responded to the early criticism (on Twitter, of course) confirming that they are indeed going to test longer tweets.

[Screenshot: Jack Dorsey’s tweet confirming that Twitter will test longer tweets]

If the change is implemented the way we expect, we think it will provide a better user experience, even for the users who are dreading it. If long tweets work along the lines of the picture below, this will be no big deal. That tweet, from a very good analysis by Dave Winer, depicts a short tweet with a link to the rest of the longer text housed inside the Twitter platform, not externally. In that case, (a) the look and user interface barely change, and (b) if you do want to read more, you’ll be reading the post inside Twitter rather than clicking a link that could lead to malware or anything else.

[Screenshot: a long tweet shown as a short preview, from Dave Winer’s analysis]
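For readers who like to see the mechanics, here is a rough sketch of that truncate-and-expand approach. The field names and the 140-character preview cutoff are our assumptions about how a longer tweet might be stored and displayed; Twitter has not published any implementation details.

```python
# Rough sketch of the "short preview, full text kept inside Twitter" idea.
# Field names and the 140-character preview limit are assumptions for
# illustration only; Twitter has not said how longer tweets would be stored.

PREVIEW_LIMIT = 140

def make_tweet(full_text):
    """Store the full text, but keep a timeline-friendly preview."""
    if len(full_text) <= PREVIEW_LIMIT:
        return {"preview": full_text, "full_text": full_text, "truncated": False}
    preview = full_text[:PREVIEW_LIMIT - 1].rstrip() + "\u2026"  # ellipsis the reader taps to expand in-app
    return {"preview": preview, "full_text": full_text, "truncated": True}

tweet = make_tweet("A very long post about breaking news. " * 50)
print(tweet["truncated"], len(tweet["preview"]) <= PREVIEW_LIMIT)  # True True
```

The point of the design is that the timeline still shows something tweet-sized, and the full text lives on Twitter rather than behind an external link.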

We are in favor of this change but really hope that Twitter’s announcement last week that it will focus on abuse and abusive users is the real deal.


Sure, free speech is important, but the folks at Twitter can do a better job of proactively deleting tweets that are clearly abusive, and of educating or punishing abusers accordingly. Doing so might actually help Twitter’s user growth problem: millions of Twitter “users” don’t actually have an account, or don’t log into it if they do; they browse Twitter logged out when looking for breaking news, opinion or whatever else they’re interested in.

NEW: For a limited time the ThirdParent audit is FREE (normally $49). You can cancel at any time. Sign up today!

Contact ThirdParent any time for help and resources for monitoring child and teen internet activity.

Work at a high school or college? We have custom solutions for monitoring dangerous or inappropriate activity. Learn more.

Follow us on Twitter or Facebook for more news and information on keeping your teens safe online. You can also sign up for our weekly newsletter below.

Reddit Ends Shadowbans, Intros Suspensions

Unless you’re an avid Reddit user, you might not know what a shadowban is. Even if you are, you might not know: many users who get shadowbanned don’t figure it out for weeks or months.

The shadowban was Reddit’s thoroughly inelegant solution for dealing with, among other things, abusive users. When users are shadowbanned (usually for abusing another redditor and getting reported for it), they can keep posting, but their posts, comments and votes are invisible to everyone else on the site. In Reddit’s own words, the shadowban “is great for dealing with bots/spam rings but woefully inadequate for real human beings.”
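If it helps to see the rule spelled out, here is a simplified sketch of the visibility logic behind a shadowban. This is our illustration, not Reddit’s code, and it shows why a shadowbanned user can take weeks to notice: from their own account, nothing looks different.

```python
# Simplified illustration of the shadowban visibility rule described above.
# Not Reddit's actual code; it only captures the core idea that a
# shadowbanned author's activity is visible to no one but the author.

shadowbanned = {"troll_account"}  # hypothetical banned username

def visible_to(post_author, viewer):
    """A shadowbanned author's posts are shown only to the author."""
    if post_author in shadowbanned:
        return viewer == post_author
    return True

print(visible_to("troll_account", "troll_account"))  # True  -- the author still sees their posts
print(visible_to("troll_account", "someone_else"))   # False -- everyone else sees nothing
```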

Reddit previewed back in May that they would be taking a number of steps to make the site more user-friendly and free of harassment. That is easier said than done, since the site is cherished by (many of) its users as a bastion of free speech.

This week Reddit is taking what we think is a very positive step forward in its delicate balancing act between allowing free speech and reining in users who are out to harass others. Account suspensions are now a thing, replacing shadowbans. According to a related post, suspensions can be handed down by Reddit employees (not by unpaid moderators) for the following actions:

  • Posting anything illegal, revenge porn or spam
  • Inciting or encouraging violence
  • Threatening, harassing or cyberbullying
  • Divulging others’ personal information (Reddit is mostly anonymous)
  • Impersonating others (does not include parodies)

Suspended users will not be able to post, comment, vote or message other users for the duration of the suspension.

Reddit maintains that the suspension is a better solution, and we agree. Suspended users will be notified via private message and will have the opportunity to appeal. They will know what they did wrong and have an incentive not to do it again. Suspensions can range from one day to an outright permanent ban, which is also warranted in some cases from what we’ve seen.
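A suspension, by contrast, is explicit and time-limited. Here is a rough sketch of what that kind of account state might look like; the field names, durations and actions are our own assumptions, used only to contrast the approach with the shadowban sketch above.

```python
# Rough sketch of a time-limited suspension, for contrast with the shadowban
# sketch above. Names and durations are assumptions, not Reddit's data model.

from datetime import datetime, timedelta

RESTRICTED_ACTIONS = {"post", "comment", "vote", "message"}

class Account:
    def __init__(self, username):
        self.username = username
        self.suspended_until = None  # None means the account is in good standing

    def suspend(self, days=None):
        """days=None models an outright (permanent) ban."""
        self.suspended_until = (
            datetime.max if days is None else datetime.now() + timedelta(days=days)
        )

    def can(self, action):
        """Suspended users cannot post, comment, vote or message until the term expires."""
        if action in RESTRICTED_ACTIONS and self.suspended_until is not None:
            return datetime.now() >= self.suspended_until
        return True

user = Account("rule_breaker")
user.suspend(days=3)
print(user.can("comment"))  # False -- restricted for the three-day term
print(user.can("browse"))   # True  -- reading is unaffected in this sketch
```

Unlike a shadowban, the user knows exactly what happened and when the restrictions end, which is the whole point.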

The fact that there is some mature content on Reddit might make the site inappropriate for some teens. There is, however, a wealth of valuable content there. Both of my teen boys use Reddit, one of them almost daily. If Reddit’s efforts succeed in making the experience safer for users, it’s a step in the right direction.

Contact ThirdParent any time for help and resources for monitoring child and teen internet activity.

Work at a high school or college? We have custom solutions for monitoring dangerous or inappropriate activity. Learn more.

Follow us on Twitter or Facebook for more news and information on keeping your teens safe online. You can also sign up for our weekly newsletter below.