Well, not much work has been carried out on my messenger for about three weeks now; real life got on top of me with the news that baby number two is on the way for my wife and me.
This, combined with an exceptionally heavy workload for my web development company, has meant my little instant messenger's development has slowed to a halt recently.
I did achieve a lot with my last dev session: many features such as broadcast, profiles, and add-friend requests are working; I added code to handle image caching and uploading; and I have separated desktop and mobile via responsive web design, so the app renders differently depending on the device.
The next big feature to sort is the chat system itself.
I thought I would take some time to talk about my thoughts on content filtering and moderation. This is something I think is missing from many big websites and social networks today, which, given the vast sums of money they have made, puzzles me at times!
Facebook, for example, could use adult image recognition to mark potentially offensive content, with filtering settings to show or hide it; I intend to use an adult image recognition class for this myself.
For my part, I am thinking of using image recognition to detect adult images, and a keyword list for chat, which together will flag content as adult and queue it for moderation.
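The keyword side of that could be as simple as checking each message against a word list. A minimal sketch in Python, assuming a flat set of banned terms; the list contents and the names `FLAGGED_KEYWORDS` and `flag_message` are illustrative placeholders, not part of the actual build:

```python
# Placeholder word list; a real deployment would load this from
# a database or config so moderators can update it without a release.
FLAGGED_KEYWORDS = {"banned_word", "another_banned_word"}

def flag_message(text: str) -> bool:
    """Return True if the chat message contains any flagged keyword."""
    # Normalise: lowercase and strip common punctuation from each word.
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return not FLAGGED_KEYWORDS.isdisjoint(words)
```

A plain word list like this is easy to evade, so it is more a first pass that routes suspect chats to human moderators than a filter in its own right.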
I am thinking of a system where content is flagged as inappropriate, an admin can escalate that flag, and escalated content can then be investigated. This level is more for things like pornographic images and chats picked out by the keyword list, both for user safety and to support law enforcement.
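That two-level pipeline (auto-flagged, then admin-escalated for investigation) could be modelled as a small state machine. A sketch under my own naming assumptions; `FlagStatus` and `escalate` are hypothetical, not code from the project:

```python
from enum import Enum

class FlagStatus(Enum):
    CLEAN = "clean"          # nothing tripped the filters
    FLAGGED = "flagged"      # picked out by image recognition or keywords
    ESCALATED = "escalated"  # elevated by an admin for investigation

def escalate(status: FlagStatus) -> FlagStatus:
    """Admins may only escalate content that is already flagged."""
    if status is FlagStatus.FLAGGED:
        return FlagStatus.ESCALATED
    return status
```

Keeping the states explicit like this also makes it straightforward to audit who escalated what and when, which matters if escalated material is later handed to law enforcement.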
I am thinking of mixing this with user content reporting, plus the ability to report users themselves for various reasons, from abuse to under-age users, and encouraging the community to use it to address the issues and concerns I have heard about on other services.
This could be further expanded with things such as user isolation if they repeatedly trip the system within a certain time window: they would see content as usual, but no one would see theirs until their account was unlocked.
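The isolation idea above boils down to counting filter trips inside a sliding time window. A sketch assuming a 24-hour window and a three-strike limit; the numbers and the `IsolationTracker` name are purely illustrative:

```python
from collections import deque

WINDOW_SECONDS = 24 * 60 * 60  # assumed window: trips within 24 hours count
MAX_TRIPS = 3                  # assumed limit before isolation kicks in

class IsolationTracker:
    """Tracks one user's filter trips and isolates them when they exceed the limit."""

    def __init__(self) -> None:
        self.trips: deque[float] = deque()
        self.isolated = False

    def record_trip(self, now: float) -> None:
        """Record a filter trip at timestamp `now` (seconds)."""
        self.trips.append(now)
        # Drop trips that have aged out of the window.
        while self.trips and now - self.trips[0] > WINDOW_SECONDS:
            self.trips.popleft()
        if len(self.trips) >= MAX_TRIPS:
            # The user still sees content as usual, but their own
            # posts are hidden from everyone else until unlocked.
            self.isolated = True

    def unlock(self) -> None:
        """Admin action: clear the record and restore visibility."""
        self.trips.clear()
        self.isolated = False
```

This "shadow" style of isolation is deliberate: the offending user notices nothing, which makes the restriction harder to game while an admin reviews the account.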
Obviously we would have banning for offenders, and perhaps a risk-level setting for wrongfully reported users so they are not automatically flagged so often in future.
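One way to realise that risk-level setting is to discount incoming reports against users with a history of being wrongfully reported. A sketch where each past wrongful report halves the weight of new reports; the weighting scheme, threshold, and function names are all assumptions of mine:

```python
def report_weight(wrongful_reports: int) -> float:
    """Each past wrongful report halves the weight of new reports against this user."""
    return 1.0 / (2 ** wrongful_reports)

def should_auto_flag(report_count: int, wrongful_reports: int,
                     threshold: float = 3.0) -> bool:
    """Auto-flag only when the weighted report total crosses the threshold."""
    return report_count * report_weight(wrongful_reports) >= threshold
```

So a user who has twice been cleared would need four times as many reports before the system flags them automatically, while genuine offenders are unaffected.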
I think it would be important to have the names and addresses of moderation staff on record for accountability too.
Obviously not all of this will be in my build; these are more just thoughts on user safety, content moderation, and filtering, and perhaps they give me a road map to work towards.