
According to reports, the Israeli armed forces are using an AI program to select human targets for killing in their fight against Hamas. The program, known as “Lavender,” identifies potential fighters among the 2.3 million inhabitants of Gaza based on various data points and has a claimed accuracy rate of 90 percent. During the initial weeks of the war, the Israeli Air Force struck several hundred sites in the Gaza Strip per day, often with only minimal human review of each target before an attack.
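
How Lavender arrives at its classifications has not been made public; the reporting describes only a system that aggregates many data points per person into a score and flags anyone above a threshold. The sketch below illustrates that generic score-and-threshold pattern only; every feature name, weight, and cut-off in it is invented for illustration and drawn from nothing in the reporting.

```python
# Purely illustrative sketch of a generic score-and-threshold classifier.
# Nothing here reflects Lavender's actual features, weights, or logic,
# none of which are public; all names and numbers are invented.

FEATURE_WEIGHTS = {
    "signal_a": 0.40,  # hypothetical data point
    "signal_b": 0.35,  # hypothetical data point
    "signal_c": 0.25,  # hypothetical data point
}
FLAG_THRESHOLD = 0.70  # hypothetical cut-off above which a person is flagged


def score(person: dict[str, float]) -> float:
    """Weighted sum of per-feature signals, each assumed to lie in [0, 1]."""
    return sum(weight * person.get(name, 0.0)
               for name, weight in FEATURE_WEIGHTS.items())


def is_flagged(person: dict[str, float]) -> bool:
    """Flag a person once their aggregate score crosses the threshold."""
    return score(person) >= FLAG_THRESHOLD


# Invented example: strong signal_a and signal_b, moderate signal_c.
print(is_flagged({"signal_a": 1.0, "signal_b": 0.9, "signal_c": 0.5}))  # True (score 0.84)
```

What even this toy version makes concrete is that a claimed 90 percent accuracy is a statistical property of the threshold across a population, not a guarantee about any individual it flags.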

In previous conflicts, only senior commanders were selected as human targets, and each was meticulously vetted before any attack. Following the Hamas attack of October 7, 2023, however, the military leadership deemed this process too time-consuming and permitted the use of “Lavender” for target selection.

The program identified around 37,000 Palestinians as suspected Hamas fighters; even taking the claimed 90 percent accuracy at face value, that error rate would leave on the order of 3,700 people wrongly flagged. A secondary program, reported as “Where’s Daddy?”, was used to track the movements of these targets and to signal when they entered their homes, enabling airstrikes on militants in their residences. Critics have raised ethical concerns about the use of AI in targeting operations, especially given the high number of civilian casualties in the conflict.
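
The reporting describes “Where’s Daddy?” only as linking tracked movements to home locations and signalling when the two coincide; its actual mechanics are not public. The sketch below shows the generic geofencing pattern that description suggests: the haversine distance formula is standard, while the radius and coordinates are invented for illustration.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres


def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in metres between two (latitude, longitude) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))


def inside_geofence(position: tuple[float, float],
                    center: tuple[float, float],
                    radius_m: float = 50.0) -> bool:
    """True when a tracked position lies within radius_m of the fence centre."""
    return haversine_m(*position, *center) <= radius_m


# Invented coordinates: a position roughly 22 m from the fence centre triggers it.
print(inside_geofence((31.50020, 34.46640), (31.50000, 34.46640)))  # True
```

The tracking primitive itself is commonplace in consumer software; the controversy reported here concerns attaching such an alert to strikes on residences.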

If confirmed, the use of AI for target selection without independent human verification would raise serious ethical questions and could implicate the Israeli armed forces in war crimes. The program’s reliance on data-driven algorithms to select targets and locations has sparked controversy: some officials praise its effectiveness, while international observers question whether such targeting decisions are proportionate and whether they do enough to protect civilians.

In conclusion, while AI can identify potential threats quickly and efficiently, its use without proper oversight raises serious ethical concerns. Careful consideration and regulation of AI in military operations are needed to ensure that it does not lead to unnecessary civilian casualties or violations of international humanitarian law.

