Entrepreneur, Law & Policy Analyst helping clients w/ strategic planning, communications interoperability, Software Developer, Scotch Enthusiast.
3433 stories · 18 followers

Signal for Help

1 Comment and 2 Shares
christophersw
2 days ago
Neat idea.
Baltimore, MD
jlvanderzwan
17 hours ago

Why Fake Shutters Make Me Angry

1 Comment
christophersw
3 days ago
This. I could not agree more - and sadly I own dumb, too-small, fake, wrong-style shutters that will be going in the trash when I repaint.
Baltimore, MD

Gallium Helps Convert CO2 Into Carbon and Oxygen

1 Share
christophersw
3 days ago
Baltimore, MD

The Impact of Sleep Deprivation

4 Shares

Last month we wrote that we wouldn't review any more papers on test-driven development, but Fucci2020 isn't really about TDD. Instead, the authors measured how well students wrote tests in order to gauge the effects of going a night without sleep. By comparing those who slept with those who didn't, they found that a single sleepless night reduced code quality by 50%. This is consistent with what we know from a century of other studies (see here for a short summary, and here for a shorter one); I don't expect companies or universities will suddenly start paying attention to the evidence, but perhaps now that so many of us are working from home it will be easier for us to take naps when we need them.

Fucci2020 Davide Fucci, Giuseppe Scanniello, Simone Romano, and Natalia Juristo: "Need for Sleep: The Impact of a Night of Sleep Deprivation on Novice Developers' Performance". IEEE Transactions on Software Engineering, 46(1), 2020, doi:10.1109/tse.2018.2834900.

We present a quasi-experiment to investigate whether, and to what extent, sleep deprivation impacts the performance of novice software developers using the agile practice of test-first development (TFD). We recruited 45 undergraduates, and asked them to tackle a programming task. Among the participants, 23 agreed to stay awake the night before carrying out the task, while 22 slept normally. We analyzed the quality (i.e., the functional correctness) of the implementations delivered by the participants in both groups, their engagement in writing source code (i.e., the amount of activities performed in the IDE while tackling the programming task) and ability to apply TFD (i.e., the extent to which a participant is able to apply this practice). By comparing the two groups of participants, we found that a single night of sleep deprivation leads to a reduction of 50 percent in the quality of the implementations. There is notable evidence that the developers' engagement and their prowess to apply TFD are negatively impacted. Our results also show that sleep-deprived developers make more fixes to syntactic mistakes in the source code. We conclude that sleep deprivation has possibly disruptive effects on software development activities. The results open opportunities for improving developers' performance by integrating the study of sleep with other psycho-physiological factors in which the software engineering research community has recently taken an interest.
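
For readers curious what a between-groups comparison like this looks like in practice, here is a minimal Python sketch. The scores are invented purely for illustration (the paper reports its own measurements of functional correctness); only the structure (two independent groups of 22 and 23 participants, a quality score per participant, a nonparametric test, and a median-based effect estimate) mirrors the kind of analysis a quasi-experiment like this relies on.

    # Toy between-groups comparison in the spirit of Fucci2020.
    # All scores are invented for illustration; the paper uses its own data.
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(42)

    # Hypothetical "quality" (functional correctness) scores, 0-100.
    rested = rng.normal(loc=60, scale=15, size=22)          # slept normally
    sleep_deprived = rng.normal(loc=30, scale=15, size=23)  # awake all night

    # Nonparametric test: small samples, no normality assumption.
    stat, p_value = mannwhitneyu(rested, sleep_deprived, alternative="greater")

    # Simple effect estimate: relative drop in median quality.
    drop = 1 - np.median(sleep_deprived) / np.median(rested)

    print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.4f}")
    print(f"Median quality drop: {drop:.0%}")  # roughly 50% with these made-up numbers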
christophersw
4 days ago
Baltimore, MD
jlvanderzwan
4 days ago
acdha
4 days ago
Washington, DC

Pentagon Wants AI to Predict Events Before They Occur

1 Comment


What if, by leveraging today's artificial intelligence to predict events several days in advance, countries like the United States could simply avoid warfare in the first place?

It sounds like the ultimate form of deterrence, a strategy that would save everyone all sorts of trouble, and it's the type of visionary thinking that is driving U.S. military commanders and senior defense policymakers toward the rapid adoption of artificial intelligence (AI)-enabled situational awareness platforms.

In July 2021, the North American Aerospace Defense Command (NORAD) and U.S. Northern Command (NORTHCOM) conducted a third series of tests called the Global Information Dominance Experiments (GIDE), in collaboration with leaders from 11 combatant commands. The first and second series of tests took place in December 2020 and March 2021, respectively. The tests were designed to occur in phases, each demonstrating the current capabilities of three interlinked AI-enabled tools called Cosmos, Lattice, and Gaia.

Gaia provides real-time situational awareness for any geographic location, built from many different classified and unclassified data sources—massive volumes of satellite imagery, communications data, intelligence reports, and a variety of sensor data. Lattice offers real-time threat tracking and response options. Cosmos allows for strategic and cloud-based collaboration across many different commands. Together, these decision tools are supposed to anticipate adversaries' actions, allowing U.S. military leaders to preempt them before kinetic conflict arises and deny adversaries any perceived benefit from the predicted actions.
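
The article describes these tools only at a high level, so the following Python sketch is purely illustrative: it shows the general shape of a fuse-score-share pipeline loosely matching the roles ascribed to Gaia, Lattice, and Cosmos. Every feed name, field, threshold, and data value here is hypothetical and has no connection to the actual systems.

    # Illustrative fuse -> score -> share pipeline; all names and numbers are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Observation:
        region: str      # where the activity was observed
        source: str      # e.g. "satellite", "sigint", "open-source"
        activity: float  # normalized activity level reported by this source

    def fuse(observations):
        """'Gaia' role: merge per-source observations into one picture per region."""
        picture = {}
        for obs in observations:
            picture.setdefault(obs.region, []).append(obs.activity)
        return {region: sum(vals) / len(vals) for region, vals in picture.items()}

    def flag_threats(picture, baseline, threshold=1.5):
        """'Lattice' role: flag regions whose fused activity exceeds a baseline."""
        return {r: level for r, level in picture.items()
                if level > threshold * baseline.get(r, 1.0)}

    def share(flags):
        """'Cosmos' role: publish flagged regions to other commands (here, stdout)."""
        for region, level in sorted(flags.items()):
            print(f"ALERT {region}: fused activity {level:.2f}")

    # Hypothetical observations.
    obs = [Observation("region-A", "satellite", 2.1),
           Observation("region-A", "open-source", 1.8),
           Observation("region-B", "sigint", 0.9)]
    share(flag_threats(fuse(obs), baseline={"region-A": 1.0, "region-B": 1.0}))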

Such tools are particularly attractive to U.S. defense leaders as they prepare for compressed decision times in the future due to greater use of AI on the battlefield.

They also invoke several popular buzzwords floating around the Beltway, including information dominance, decision superiority, integrated deterrence, and joint all-domain command and control (JADC2). In a speech at a one-day conference of the National Security Commission on Artificial Intelligence (NSCAI), U.S. Defense Secretary Lloyd Austin touted the importance of AI for supporting integrated deterrence, expressing his intent to use "the right mix of technology, operational concepts, and capabilities—all woven together in a networked way that is so credible, flexible, and formidable that it will give any adversary pause."

These AI-enabled platforms are expected to go beyond merely providing enhanced situational awareness and better early warning to offer U.S. military leaders what is considered the holy grail of operational planning—producing strategic warning of adversarial actions in the gray zone (i.e., the competition phase), prior to any irreversible moves having been made. Such an advancement would allow decision-makers to formulate proactive options (rather than the reactive ones of the past) and enable much faster decisions.

It's tempting to ask: What could possibly go wrong? Everyone knows the canon of sci-fi novels and films that explore the dangerous pitfalls of AI-enabled systems—including Minority Report, The Forbin Project, and War Games. The idea is also oddly reminiscent of the Soviet intelligence program known as RYaN, which was designed to anticipate a nuclear attack based on data indicators and computer assessments.

During the 1980s, the KGB wanted to predict the start of a nuclear war as much as six months to a full year in advance from a wide variety of indicators—e.g., physical locations of U.S. nuclear warheads and monitored activities at American embassies and NATO, unplanned movement of senior officials, FEMA preparations, military exercises and alerts, scheduled weapons maintenance, leave policies for soldiers, visa approvals and travel information, and U.S. foreign intelligence activities. They even considered the removal of documents related to the American Revolution from public display as a potential indicator of war. Massive amounts of data were fed into a computer model to "calculate and monitor the correlation of forces, including military, economy, and psychological factors, to assign numbers and relative weights." The findings from RYaN contributed to Soviet paranoia about a pending U.S. nuclear attack in 1983 and nearly led their leadership to start a nuclear war.
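
The quoted description of RYaN amounts to a weighted-indicator score: each observable gets a number and a relative weight, and the weighted sum is tracked over time as a warning level. Here is a minimal Python sketch of that idea; the indicators, weights, and readings are entirely invented and far simpler than whatever the KGB actually computed.

    # Toy weighted-indicator warning score in the spirit of the RYaN description.
    # Indicator names, weights, and readings are all invented for illustration.
    indicators = {
        "embassy_activity":    {"weight": 0.30, "reading": 0.4},  # readings on a 0..1 scale
        "military_exercises":  {"weight": 0.25, "reading": 0.7},
        "leave_cancellations": {"weight": 0.20, "reading": 0.1},
        "weapons_maintenance": {"weight": 0.15, "reading": 0.3},
        "official_travel":     {"weight": 0.10, "reading": 0.2},
    }

    # Weighted sum of the readings; RYaN tracked something like this over time
    # and treated a rising score as evidence of attack preparations.
    score = sum(v["weight"] * v["reading"] for v in indicators.values())
    print(f"Warning score: {score:.2f}  (0 = quiet, 1 = every indicator maxed out)")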

Though such an idea came long before its time, today's machine learning technologies are now capable of detecting subtle patterns in seemingly random data and could start making accurate predictions about adversaries in the near term. Amidst the wellspring of enthusiasm for AI-enabled decision tools, U.S. defense leaders are hoping to deflect any concerns by insisting that adoption will be responsible, humans will remain in the loop, and any systems that produce unintended consequences will be taken offline.

However, national security experts such as Paul Scharre, Michael Horowitz, and many others point out the critical technical hurdles that will need to be overcome before the benefits of using AI-enabled tools outweigh the potential risks. Though much useful data already exists for plugging into machine learning algorithms, assembling a truly unbiased dataset designed to predict specific outcomes remains a major challenge, especially for life and death situations and in areas of sparse data availability such as a nuclear conflict.

The complexity of the real world offers another major obstacle. To function properly, machine learning tools require accurate models of how the world works, but their accuracy depends largely on human understanding of the world and how it evolves. Since such complexity often defies human understanding, AI-enabled systems are likely to behave in unexpected ways. And even if a machine learning tool overcomes these hurdles and functions properly, the problem of explainability may prevent policymakers from trusting it if they are not able to understand how the tool generated its outcomes.

Leveraging AI-enabled tools to make better decisions is one thing, but using them to predict adversarial actions in order to preempt them is an entirely different ballgame. In addition to raising philosophical questions about free will and inevitability, there is a real risk that proactive actions taken in response to predicted adversarial behavior could be perceived by the other side as aggressive and end up catalyzing the very war they were meant to avoid in the first place.



christophersw
7 days ago
Well, that's a lot more useful than predicting them after...
Baltimore, MD

Metro 7000-Series Safety Problems ‘Could Have Resulted In A Catastrophic Event’

1 Comment

Federal safety investigators said problems with wheels on 7000-series Metro trains could have led to a “catastrophic incident,” and said the problems were widespread and longstanding. Metro has known about the issues since 2017, according to National Transportation Safety Board investigators, who found problems with wheelsets on dozens of 7000-series Metro cars.

Late Sunday night, the Metrorail Safety Commission ordered WMATA to remove all 7000-series trains from service as the investigation continues. The trains account for roughly 60% of Metro’s fleet, and without them, riders saw significant delays during this morning’s commute.

Investigators laid out their initial findings at a press conference this morning, following last week’s Blue Line derailment near the Arlington Cemetery station. The train appears to have had multiple minor derailments and re-railments throughout that day, according to the NTSB. Investigators found pieces of brake discs, apparently from the derailed train, near the Largo and Rosslyn stations. The brake pieces apparently became dislodged when the train left the track, officials said.

As for the preliminary cause of the derailment, investigators say the wheels moved outward on the axle, causing problems at the rail switch near the Arlington Cemetery Station. During the derailment, the electrified third rail was damaged, which could have caused a fire. No one was injured during the incident.

According to the NTSB, Metro has reported 31 wheel assembly failures on 7000-series trains since 2017. An additional 21 cars were found to have the issue. Investigators have inspected 514 of the 748 railcars, so additional problems could be found, officials said.

“We are fortunate that no fatalities or serious injuries occurred as a result of any of these derailments,” said NTSB Chair Jennifer Homendy. “But the potential for fatalities and serious injuries was significant. This could have resulted in a catastrophic event.”

The train that derailed was last inspected for wheel alignment on July 27, 2021, according to safety officials. Metro says trains are inspected every 90 days. The train was due for its next inspection on October 27.

Homendy encouraged other transit agencies that use Kawasaki-made trains to check for the issue. The 7000-series trains were made in Lincoln, Nebraska. Kawasaki has also built trains for VRE and MARC locally, as well as for the MBTA in Boston, SEPTA in Philadelphia, and the MTA, LIRR, and PATH systems in the New York City region.

Riders across the region reported significant delays during Monday's disrupted commute. Trains were supposed to arrive every 30 minutes on all lines Monday morning, but waits stretched to 60-plus minutes on several lines, including the Red Line.

With only 40 trainsets available to run service, platforms at Takoma, Fort Totten, and L’Enfant Plaza were unusually full of riders.

Metro tweeted an apology this morning, saying the move was made “out of an abundance of caution.”

“We understand the impact this decision has on transportation for the DMV area (National Capital Region),” Metro said in the tweet. “We apologize for this reduction in service and the inconvenience this is causing our customers.”

Metro also acknowledged crowded cars, saying face masks continue to be required and that Metrorail cars recycle their air approximately every three minutes.

It’s unclear how long the service impacts will last. It’s also unclear if trains could be returned one by one after they are inspected or if they will all be put back in service once the entire issue is resolved. The independent Metrorail Safety Commission is in charge of that decision, but says it won’t know more on that until Metro submits its corrective action plan. That timeline is also unknown.


christophersw
8 days ago
Wow... So much for "Back to good".
Baltimore, MD