According to liberals, Trump’s America has a gender problem. And now, according to research conducted at the University of Colorado Boulder, facial-recognition technology has a gender problem too. Because of course, computers are not woke, and if the programmers are not woke, neither will the computer be.
First, if I may bring to life the general environment of Boulder, Colorado: CU Boulder draws thousands of intelligent free thinkers to the area, many of whom do incredible research that contributes positively to the world we live in. It is truly a beautiful town, but make no mistake, Boulder is so inclusive that it is exclusive. Its beauty is eclipsed by a wokeness that will get your MAGA hat ripped off your head.
Boulder is considered a utopia, or a dystopia, or a liberal swamp, all depending upon your outlook. In all sincerity, it was hit especially hard by Trump’s election in 2016. It was such a shock to the town’s culture that officials placed “safe-space” signs in the entryways of all their government buildings to comfort local citizens, instantly making them UNsafe for anyone wearing a MAGA hat. For more on Boulder’s culture, I highly recommend this overview from AwakenwithJP.
All of this leads me to a progressive research team currently operating at CU Boulder. This team of thought leaders is working to expand its understanding of facial-recognition software, innovation meant to help make this world a “safer place” for all of us.
To date, they’ve conducted research at CU’s Boulder and Colorado Springs campuses. Their extensive research first launched (during Obama’s second term) in the spring semesters of 2012 and 2013 at the UCCS campus. It was then that they began the [highly unethical] practice of secretly photographing more than 1,700 students and faculty on that campus, none of whom knew they had been photographed for research. It happened because, of course, these woke researchers assumed everyone around them agreed 100% on the benefits of facial recognition. Watch this short interview about their “fascinating” introductory research:
All of this was done over six years ago, and now the Boulder team responsible for the further evolution of facial-recognition research has taken a turn that is so 2019. They’ve now concluded that, like us humans, even binary-based supercomputers struggle with issues surrounding transgender identities.
With a brief glance at a single face, emerging facial-recognition software can now categorize the gender of many men and women with remarkable accuracy. BUT IF that face belongs to a transgender person, such systems get it wrong more than one-third of the time, according to new University of Colorado Boulder research.
“We found that facial analysis services performed consistently worse on transgender individuals, and were universally unable to classify non-binary genders,” said lead author Morgan Klaus Scheuerman, a Ph.D. student in the Information Science department. “While there are many different types of people out there, these systems have an extremely limited view of what gender looks like.”
And that’s not all; it’s not only about gender identity — it’s about race too.
Previous research suggests such systems tend to be most accurate when assessing the gender of white men but misidentify women of color as much as one-third of the time.
“We knew there were inherent biases in these systems around race and ethnicity, and we suspected there would also be problems around gender,” said senior author Jed Brubaker, an assistant professor of Information Science. “We set out to test this in the real world.”
Cue the amazing world of research on Instagram faces and gender identity. A research sample that is not at all staged; a sample that represents a utopia of facial recognition and the collective mindset of a woke, inclusive culture: the perfect research environment to prove that race and gender bias exist, confirmed by way of a multivariate Instagram analysis. Otherwise known as a dumpster fire of research, validated by woke thought leaders who now know more about your gender identity and race than you do. Because of course, understanding our identities and skin color defines our thoughts and values, and it makes for a safer, more controlled community.
Researchers collected 2,450 images of faces from Instagram, each of which had been labeled by its owner with a hashtag indicating the owner’s gender identity. The pictures were then divided into seven groups of 350 images (#women, #man, #transwoman, #transman, #agender, #agenderqueer, #nonbinary)…
On average, the systems were most accurate with photos of cisgender women (those born female and identifying as female), getting their gender right 98.3% of the time. They categorized cisgender men accurately 97.6% of the time. But trans men were wrongly identified as women up to 38% of the time. And those who identified as agender, genderqueer or nonbinary, indicating that they identify as neither male nor female, were mischaracterized 100% of the time.
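For the curious, here is a minimal sketch of how per-group accuracy figures like these are computed. This is not the researchers’ actual code, and the hashtag labels and toy counts below are purely illustrative; the point is simply that accuracy is tallied separately within each hashtag group.

```python
from collections import defaultdict

def per_group_accuracy(samples):
    """Compute classification accuracy separately for each hashtag group.

    `samples` is a list of (group_hashtag, true_label, predicted_label)
    tuples, e.g. ("#transman", "male", "female") for a misclassification.
    Returns a dict mapping each hashtag to its accuracy (0.0 to 1.0).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, predicted in samples:
        total[group] += 1
        if predicted == truth:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Toy data: a binary classifier does well on cisgender faces but labels
# every #nonbinary face "female", so that group's accuracy is zero,
# echoing the study's finding that non-binary genders were never classified.
samples = (
    [("#women", "female", "female")] * 98
    + [("#women", "female", "male")] * 2
    + [("#transman", "male", "male")] * 62
    + [("#transman", "male", "female")] * 38
    + [("#nonbinary", "nonbinary", "female")] * 100
)
print(per_group_accuracy(samples))
```

A classifier whose output vocabulary contains only “male” and “female” can never score above 0% on a group whose true label is neither, which is exactly the 100%-mischaracterization result reported above.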
Essentially, their new data suggests that our binary supercomputer systems are racist and phobic and operate on outdated stereotypes. Likely due to an inherently racist, MAGA-based culture: a by-product of the binary computer logic of a world that still believes there are only two genders.
“These systems don’t know any other language but male or female…” says Brubaker. The study also suggests that such services identify gender based on outdated stereotypes.
Scheuerman goes on to detail what seems to be their transgender research objective: a goal seemingly set on validating the transgender push to eliminate male and female gender classification altogether, inclusively making our world a better place.
“When you walk down the street, you might look at someone and presume that you know what their gender is, but that is a really quaint idea from the ’90s, and it is not what the world is like anymore,” said Brubaker. “As our vision and our cultural understanding of what gender is has evolved, the algorithms driving our technological future have not. That’s deeply problematic.”
Is it deeply problematic? Isn’t it a good thing if computers DO NOT identify or tag someone because of their race or gender? It seems that this research is so far over the line of inclusivity that it’s discriminatory regardless of how you look at it.
I would like to see a list of companies that would potentially buy this data. Why would any corporation attempt to pin down anyone’s gender or race through their facial identity? Who does that, other than someone racist or discriminatory? Moreover, if the data is not for sale, why does any democracy or republic need it? Ummmm, maybe ask China.
The progressives active in this research likely believe that they’re doing great things to save our world. But while all of this facial recognition research has been happening here in the US — millions of Hong Kong’s citizens are waving the American flag, protesting China’s facial recognition intrusion and control that has long been in place on their side of the pond.
So, friends, if you’re an anti-socialist or if you object to hidden cameras and facial recognition tagging your gender or race or political ideology, pack your umbrella. Then when you see a hidden camera tracking you, open up your umbrella, and cover your face. The people of Hong Kong will tell you that it’s what you have to do to escape “the jail” of facial recognition control. Just be aware of this factor: if too many people start over-utilizing their umbrellas for privacy, places like Boulder may decide to also ban umbrellas.
YMMV, and if so, that’s great. This topic invites abundant debate from many different perspectives. Listen to this debate on facial-recognition technology and then tell us what you think: