The Naru Project at Chi Hack Night

At Chi Hack Night, Zachary Damato and Nick Wesley from The Naru Project talked about their nonprofit, which focuses on incorporating artificial wetlands into our city’s river systems.

The Naru Project is facilitating the design and implementation of a park consisting of floating gardens, wildlife habitat, and more to be enjoyed by the local community.

The Naru Project is named after the ancient Akkadian word for river, in part because rivers were the heart of Akkadian civilization, and the Naru Project wants to connect our waterways to the heart of our communities.

Their work focuses on the Chicago River just east of Goose Island. The team would like to build a park that brings wildlife back to the river and helps clean it up.

The Naru Project team explained that the river used to be full of wildlife, but that it was cleared out to make it easier for companies to ship goods through the Chicago River. The city has not always been kind to the river, and to this day it still dumps sewage into the river during periods of heavy rain or snowmelt. The team hopes that reintroducing wildlife into the river can help the process of cleaning it.

The team is using a series of artificial islands to give plant and animal life a place to grow. These are floating rafts with inserts where plant life can grow. The Naru Project has just completed a 24-month study to see whether these raft units can enable plant life to come back. The units not only survived *two* winters, but after the study concluded the team discovered that fish were breeding underneath the units, which they regard as a huge success. To get permission to do the study, they had to coordinate with the US Army Corps of Engineers, which has jurisdiction over water resources development in the Chicago metropolitan area.

Now that the study has shown signs of success, the team is moving toward construction of the park. Here’s their Phase One Outline.

If you want to get involved in the Naru Project, you can reach out to them by emailing [email protected]. If you want to get more involved in environmental projects, check out the environment breakout group at ChiHackNight!

Join Maptime Chicago & ADA 25 this Friday!

Please join us on May 22nd to discuss opportunities to improve public and open accessibility data in our community.

In particular, we would like to identify ways in which OpenStreetMap, the free, open, and community-built map of the world, can better capture characteristics of the built environment that impact how people with disabilities move throughout the city.
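For context, OpenStreetMap describes features with free-form key/value tags, and several widely used tag keys already cover accessibility. The sketch below (written as Python dictionaries) is only an illustration of how a single entrance and a single street crossing might be tagged; the specific values are hypothetical examples, not data from the Chicago map.

```python
# Illustrative OpenStreetMap accessibility tags, shown as Python dicts.
# Keys such as wheelchair, kerb, and tactile_paving are established OSM keys;
# the values below describe a hypothetical entrance and crossing.

building_entrance_tags = {
    "entrance": "main",
    "wheelchair": "yes",       # step-free entry
}

street_crossing_tags = {
    "highway": "crossing",
    "kerb": "lowered",         # curb cut present
    "tactile_paving": "yes",   # detectable warning surface
}

print(building_entrance_tags, street_crossing_tags)
```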

This is being done in partnership with ADA25 Chicago. ADA 25 Chicago will commemorate the 25th anniversary of the Americans with Disabilities Act in 2015 and leverage this milestone to improve the quality of life for people with disabilities—often considered the last frontier of civil rights. The Chicago Community Trust is the lead funder of this initiative.

Maptime Chicago is the local chapter of Maptime – whose mission is to open the doors of cartographic possibility to anyone interested by creating a time and space for collaborative learning, exploration, and map creation using mapping tools and technologies.

Maptime Chicago and Smart Chicago are working to both improve the data in OpenStreetMap and empower community members to contribute to and benefit from the map. We need your insight and feedback to ensure this work reaches the right members of the community and is primed to create lasting impact. We look forward to meeting with you.

The outcomes of this meeting will help Maptime Chicago and Smart Chicago plan a future Maptime event centered around adding and maintaining accessibility data in OpenStreetMap.

You can register for the event here!

Meeting Details

  • When: Friday, May 22nd, 12-1:30pm. Lunch will be provided.
  • Where: Chicago Community Trust (map) – 225 North Michigan Avenue, #2200. Hosted by Smart Chicago.
  • Contact: [email protected].

Meet people where they are: new analysis of the top practices in #civictech, according to the people who do the work

“Democracy is a conversation, not a monologue!” — US Department of Arts & Culture

Last month, I posted an open call to hear from practitioners who build tech for public good with their communities (not “for” them) about how they do their work, in their own words. This framing is important: to understand the most effective approaches for creating community-led tech, we have to practice what we preach. Although I have researched and analyzed best practices for “civic engagement in civic tech” through the Experimental Modes initiative, I wanted to understand whether the models I found resonated with real-world practice, and if not, what models did.

To this end, at the Experimental Modes convening on April 4, 2015 in Chicago, we launched a case study collection project (the “case study sprint”) on-site and published it online soon after.

Today, we share an in-depth analysis of the case studies we received. From radio activism in Mulukuku, Nicaragua, to community journalism in East Palo Alto, California, and from question campaigns about Boston’s metro transit future to a “People’s State of the Union” held by creatives across the USA, the case studies we assembled represent a diversity of geographies, communities, conditions, and technologies. Though they may differ from each other in many ways, and certainly from our mainstream understanding of “civic tech”, what they have in common is their approach.

Go where people are and work together

By and large, the projects documented in these case studies invest energy in in-person outreach and build close relationships with individuals as well as communities in spaces they share, often by playing with, discussing, and teaching each other how to get creative with the technology that’s already there.

“Take leadership from the most impacted”

Commonly identified approaches that intersected with and went beyond the 5 Modes of Civic Engagement in Civic Tech included:

  • student-led teaching (with several specific citations of the Freirean model)
  • establishing community ownership by “building with, not for”
  • embedding engagement and technical work inside demographic and communally relevant events, and
  • investing concentrated time in relationship-building before moving on to technical development

As you’ll see after reviewing our findings in full below, this study is just a start, but what it reveals about existing community technology practice is vital to consider. Putting people before tool production in civic projects isn’t just a throwaway cocktail line. It’s a set of real practices, evident in communities across the country and around the globe, that make the democratizing and empowering potential of technology real by ensuring the work itself is democratized and shares power. Whether or not these projects identify as “civic”, and whether or not the tech involved meets the mainstream standard, has no bearing on whether the work is solid, genuinely collaborative, and co-created with those it seeks to help.

There’s more work to do and more we’ll learn about how to do it if we ourselves collaborate. We’re leaving the case study form open for further contribution and analysis. Share your story with us here.

Follow-up from On The Table 2015: Data Integrity for Small Businesses and Small Non-Profits

For On The Table 2015, I met with Heidi Massey and Ben Merriman over coffee and tea in the Loop. My idea for the conversation focused on creating an open consent form template: a web form users could fill out and then export as a Memorandum of Understanding (MOU), a Non-Disclosure Agreement (NDA), or a Data Sharing Agreement (DSA).

The different documents work in different contexts. Except when working with datasets protected by federal law (more on this later), calling an agreement between parties an MOU or a DSA is largely a matter of habit, while an NDA is a legally binding contract that says which types of confidential information should not be disclosed. Within legal limits, there’s nothing stopping you from writing agreements for your organization in the language and structure you prefer. Consider the purpose of the dataset, who has stakes in its integrity, and what might happen to the dataset in the future.
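To make the web-form idea concrete, here is a minimal sketch of the template-substitution step, assuming form responses feed a plain-language agreement. The field names and wording are hypothetical illustrations, not legal language.

```python
from string import Template

# Hypothetical plain-language MOU template; form responses fill the blanks.
MOU_TEMPLATE = Template(
    "Memorandum of Understanding\n\n"
    "$provider agrees to share the dataset \"$dataset\" with $recipient "
    "for the purpose of $purpose. This agreement lasts until $end_date, "
    "after which $recipient will destroy all copies of the dataset.\n"
)

# Example form responses (all hypothetical).
form_fields = {
    "provider": "Example Neighborhood Nonprofit",
    "recipient": "Example Research Partner",
    "dataset": "2015 client intake records",
    "purpose": "evaluating program outcomes",
    "end_date": "December 31, 2016",
}

print(MOU_TEMPLATE.substitute(form_fields))
```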

Organizations often keep boilerplate NDAs and MOUs on file. An employee, consultant, or other partner adds their details to the agreement. Both parties sign it and each keeps a copy. The agreement acts as a promise that, essentially, data stays where it belongs. Violations end the data sharing relationship.

We saw problems with agreements whose force relies on the color of law and a CYA (Cover Your Ass) mentality. So we tried to imagine how the language of the agreements could promote a culture of shared best practices. The conversation followed Heidi’s idea that small nonprofits have more in common with small businesses than they do with very large nonprofits. Here’s a plain-English outline for a data agreement that also works as a data integrity checklist.

People who are working with shared data should understand:

  • How the data is formatted for use. This means organizing the dataset into simple tables and, for example, using the same file type, naming conventions, and variable order.
  • The versions of the dataset. An original version of the dataset should be kept unmodified. Changes to the dataset should be made to a copy of the original version and documented in detail. The location of the original version of the dataset should be known but access restricted.
  • How long the data sharing agreement lasts. The dataset’s life cycle (how it is created, where it can be transferred, and when, if at all, it is destroyed) is just as important as a straightforward timeline for deliverables.
  • How to keep information confidential. Avoiding accidental violations of the data sharing agreement is easier when everyone who works with the dataset is familiar with its terms of use. You can restrict access to datasets with password protection and by defining read/write roles for users. Data cleaning is a crucial part of this process to ensure that personally identifiable information is kept safe (see the sketch after this list).
  • What costs come with sharing the data. This means being clear about who is in charge of updating the dataset, whether there are financial obligations associated with the data sharing process, and knowing the risks associated with breaches. Federal law regulates the sharing of datasets about school children (FERPA), medical information (HIPAA), and research involving vulnerable populations (overseen by IRBs).
  • Specific use requirements. This is the nitty-gritty of data sharing. Use requirements specify whether a dataset can be shared with third parties, what other data (if any) can be linked to the dataset, and what changes can be made to the dataset.
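As a companion to the confidentiality item above, here is a minimal sketch of one data-cleaning step: replacing direct identifiers with one-way hashes before a copy of the dataset is shared. The file paths and column names are hypothetical, and hashing is only a first step; truly sensitive fields may need to be dropped entirely.

```python
import hashlib

import pandas as pd

# Hypothetical identifier columns; adjust to the dataset being shared.
PII_COLUMNS = ["name", "email", "address"]

def prepare_for_sharing(path_in: str, path_out: str) -> None:
    """Write a shareable copy of a dataset with direct identifiers hashed.

    The original file is left untouched, which also satisfies the
    "keep an unmodified original version" item in the checklist above.
    """
    df = pd.read_csv(path_in)
    for col in PII_COLUMNS:
        if col in df.columns:
            df[col] = df[col].astype(str).map(
                lambda value: hashlib.sha256(value.encode("utf-8")).hexdigest()[:12]
            )
    df.to_csv(path_out, index=False)

# Hypothetical paths: the original stays in place, the cleaned copy is shared.
prepare_for_sharing("original/clients.csv", "shared/clients_cleaned.csv")
```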

Ben has written extensively about the consent process as it relates to the genetic material of vulnerable populations. A vulnerable person — say, a prisoner, child, or an indigenous person — consents to give a sample of their genetic material to a researcher for a study. The genetic material gets coded into a machine readable format and aggregated into a dataset with other samples. The researchers publish their study and offer the aggregated dataset to others for study.

Image from Anne Bowser and Janice Tsai’s “Supporting Ethical Web Research: A New Research Ethics Review”. Copyright held by International World Wide Web Conference Committee: http://dx.doi.org/10.1145/2736277.2741654.

As it stands, though, there is no way for a person to revoke their consent once they give away their genetic material. The dilemma applies not just to genetic material but to any dataset that contains sensitive material. We thought people should have a say in what data counts as sensitive. An organization can limit how much data is shared in the first place, but technical and capacity limitations stop the people “in” datasets from having a voice during the dataset’s full life cycle.

For more information you can go to one of Smart Chicago’s meetups or review a list of informal groups here. The documentation is from last year’s Data Days conference as part of the Chicago School of Data project. There’s a large community in Chicago willing to teach people about data integrity. Check out Heidi’s resource list, which you can access and edit through Google.

Modeling Pension Reform at OpenGov Hack Night

At Chi Hack Night, the Modeling Pension Reform Breakout Group shared their work so far and helped to explain the problem with pensions in Illinois.

The breakout group, led by Ben Galewsky, David Melton, Nathan Pinger, Denis Roarty, and Tim Sharko, is made up of volunteers trying to educate public employees, pensioners, taxpayers, and policy makers about the math behind pension systems, the current debt, and possible solutions. The group formed at Chicago’s OpenGov Hack Night as one of its working groups and has been meeting regularly for the past six months.

Here are the group’s slides and highlight video:

Below, we’ve laid out some key points from the team’s presentation:

The problem with pensions

In the private sector, most companies have switched from offering pensions to 401(k) defined contribution plans. In the public sector, however, the majority of government employees still have a pension. The main difference between a pension and a 401(k) is that a pension guarantees a certain monthly income and places the investment risk on the plan provider, while a 401(k) offers no such guarantee and the employee assumes the investment risk. Pensions are an employee benefit, but state workers also pay into the system.

According to the team, the State of Illinois has $111 billion of unfunded liabilities for the five Illinois pension systems: General Assembly Retirement System, Judges Retirement System, State Employee Retirement System,  State Universities Retirement System, and the Teachers Retirement System.

The team stated that in order to make up for the shortfall, each Illinois family would have to pay a levy of at least $23,000, or every employee and retiree in these five pension plans would have to take a loss of at least $146,000 in retirement savings.

The problem in Illinois became even more difficult in 1970, when the Illinois Constitution was amended to bar the state from reducing pension benefits for public employees once they have been hired. Illinois recently moved to a two-tier system that reduces benefits for future employees, but the state was constitutionally bound to keep things the same for current employees.

The team stressed that what got the state in trouble wasn’t the stock market tanking or government employees getting raises. It’s that the state skipped its required payments, and the missed contributions compounded the problem. The state also pays a fixed cost-of-living adjustment (COLA) of 3%, which is currently compounding at or above inflation. An added complication is that in the Teachers Retirement System, school districts can set the amount of benefits, but they aren’t responsible for paying the bill.
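To illustrate how quickly the fixed, compounded 3% COLA mentioned above adds up, here is a minimal sketch; the starting benefit is a hypothetical figure, not one from the presentation.

```python
# A 3% COLA compounds: by the rule of 72, the benefit roughly doubles
# in about 24 years, no matter what inflation does.
annual_benefit = 40_000   # hypothetical starting pension
cola = 0.03               # fixed, compounded cost-of-living adjustment

for year in range(24):
    annual_benefit *= 1 + cola

print(round(annual_benefit))  # ~81,300 after 24 years of compounding
```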

The last thing the team says makes solving pensions difficult is that taxpayers tend to fall asleep when it comes to the nitty-gritty details of pension reform.

Building an understanding of the math

To help residents better understand the pension problem, the team has been building two calculators that model the effects pension reform would have.

The first is a Pension Calculator that lets pensioners and interested taxpayers enter their personal information and compare current contributions and benefits to proposed scenarios. The second is a Liability Calculator that lets taxpayers and policy makers explore what the statewide liability looks like under various scenarios. The calculators are designed to be rough models of the pension systems, given the team’s limited access to data and actuarial resources.
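The team’s calculators are more detailed than this, but the core of any liability estimate is discounting a stream of COLA-adjusted benefit payments back to the present. Here is a minimal sketch under simplified assumptions (a flat discount rate, a fixed number of payment years, and a hypothetical benefit); real actuarial models also use mortality tables, salary growth, and service-credit rules.

```python
def present_value_of_benefits(annual_benefit, years, cola=0.03, discount_rate=0.07):
    """Rough present value of one retiree's benefit stream.

    The benefit grows by the COLA each year, and each payment is
    discounted back to today at an assumed rate of return.
    """
    pv = 0.0
    benefit = annual_benefit
    for t in range(1, years + 1):
        pv += benefit / (1 + discount_rate) ** t
        benefit *= 1 + cola
    return pv

# Hypothetical retiree: a $40,000/year benefit expected for 25 years.
print(round(present_value_of_benefits(40_000, 25)))  # roughly 614,000
```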

The calculators let users see what effects different pension plans would have.

The team is still working on the calculators, but expects to launch soon.

Currently, the team is working on reverse-engineering the actuarial tables the State uses to determine how the pensions will look in later years. They have filed a FOIA request for both the database and the calculations the State uses.

Getting involved

The team is currently looking for access to the detailed plan data provided to actuaries, access to state actuarial staff and models, JavaScript developers, UX designers, and help marketing the tool. They would also like input on what is politically and pragmatically feasible and on whether there are other approaches the team should consider.

You can join them by attending the Chi Hack Night Pension Breakout Group.

Results of the CDOT / Textizen Poll on Placemaking

CDOT Textizen Poster

As part of the CivicWorks Project, we maintain a Textizen instance so that local nonprofits and government agencies can get feedback from residents. Our most recent partnership was with the Chicago Department of Transportation and their placemaking survey.

We wanted to give a few highlights of what we learned doing the survey as well as talk about how your organization can take advantage of Textizen.

Overall Results:

  • Total number of participants: 2,117
  • English: 1,887
  • Spanish: 220
  • Total texts: 13,485
  • Completion rate: 58.5%
  • Age range: 41% of English respondents were 15-25, 36% were 26-35
  • Most active times: 9am and 7pm

Responses to Select Questions:

I would like to see more _ for Chicago’s streets! (Multiple Choice) [English]
A. Trees & Landscaping 44%
B. Seating 13%
C. Public Gathering Spaces 19%
D. Bike Amenities 17%
E. Wider Sidewalks 7%

Which events do you want to see more of in Chicago? (Multiple Choice) [English]
A. Cultural event/art 22%
B. Street Fests 23%
C. Farmer/flea markets 34%
D. Free community services 22%

¿Cuáles eventos le gustaría ver más en Chicago? (Which events would you like to see more of in Chicago?) (Multiple Choice) [Spanish]
A. Evento cultural/arte (cultural event/art) 28%
B. Mercados (markets) 22%
C. Festivales en la calle (street festivals) 29%
D. Servicios comunitarios (community services) 21%

How do you mainly get around your neighborhood? (Multiple Choice) [English]
A. Drive 9%
B. Bike 14%
C. Walk 38%
D. Transit 38%
E. Other 1%

Mindmixer Results

The Chicago Department of Transportation also ran a Mindmixer campaign at the same time as the Textizen poll. Mindmixer helps governments get feedback from residents by letting them post ideas on different topics. One of the most popular ideas in this Mindmixer campaign was to create a suburban bus station on the empty lot at Michigan and Roosevelt.

The Chicago Department of Transportation will use the results of the campaigns to further develop its Complete Streets design guidelines. You can find more information about the program on the Chicago Department of Transportation website.

Textizen record set for most participation in a Spanish-language poll

This CDOT campaign had the most Spanish-language participation of any Textizen poll to date, with 221 total responses. CDOT achieved this by deploying an equal number of ads in Spanish and English and by using different photos. CDOT also gave presentations to Spanish-speaking audiences to help spread the word.

The campaign also hit several community blogs which helped spread the word throughout different neighborhoods.

Next step: Crunching numbers

The next step for CDOT is to take the Textizen and SurveyMonkey results and merge them. The team will then run analyses so they can give better guidance to policy makers. When the CDOT team makes its recommendations for placemaking, the document will likely contain a lot of technical information. CDOT intends to weave results from the survey into those recommendations so that it can tie the findings back to people.
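As a rough illustration of that merge step, assuming both tools can export responses as CSV (the file and column names below are hypothetical), the two exports can be stacked into one table for analysis:

```python
import pandas as pd

# Hypothetical CSV exports from the two survey tools.
textizen = pd.read_csv("textizen_responses.csv")
surveymonkey = pd.read_csv("surveymonkey_responses.csv")

# Tag each response with its source, keep the columns the exports share,
# and stack them into a single table.
textizen["source"] = "textizen"
surveymonkey["source"] = "surveymonkey"
shared_columns = ["question", "answer", "language", "source"]
combined = pd.concat(
    [textizen[shared_columns], surveymonkey[shared_columns]],
    ignore_index=True,
)

# Example cut: how each answer breaks down by question and source.
print(combined.groupby(["question", "source"])["answer"].value_counts())
```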

To keep up with the progress, you can visit http://www.chicagocompletestreets.org/ for more information on CDOT’s efforts.

If you think that Textizen could help your government agency or non-profit, feel free to start a conversation with us here!