Facebook’s feckless ‘Fairness Flow’ won’t fix its broken AI


Facebook today published a blog post detailing a three-year-old solution to its modern AI problems: an algorithm inspector that only works on some of the company's systems.

Up front: Called Fairness Flow, the diagnostic tool allows machine learning developers at Facebook to determine whether certain kinds of machine learning systems contain bias against or towards specific groups of people. It works by inspecting the data flow for a given model. Per the company: To measure the performance of an algorithm's predictions for certain groups, Fairness Flow works by dividing the data a model uses into relevant groups…
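
Facebook hasn't released Fairness Flow itself, but the general technique the post describes, slicing a model's predictions by group and comparing a performance metric across those slices, is simple to sketch. The snippet below is an illustrative stand-in, not Facebook's implementation: the labels, predictions, group names, and the per_group_accuracy helper are all invented for the example.

# A minimal sketch of group-sliced evaluation: split a model's predictions
# by group membership and compare a metric across groups. All data here is
# made up for illustration; this is not Facebook's Fairness Flow code.
import numpy as np

def per_group_accuracy(y_true, y_pred, groups):
    """Return {group: accuracy} so large gaps between groups can be flagged."""
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        results[g] = float((y_true[mask] == y_pred[mask]).mean())
    return results

# Hypothetical ground truth, model predictions, and group memberships.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
groups = np.array(["group_a", "group_a", "group_a", "group_a",
                   "group_b", "group_b", "group_b", "group_b"])

scores = per_group_accuracy(y_true, y_pred, groups)
gap = max(scores.values()) - min(scores.values())
print(scores, "gap:", gap)

In practice, a tool of this kind would report several metrics per group (calibration, false positive rate, and so on) and flag any group whose scores diverge sharply from the rest.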

