TensorFlow Constrained Optimization Example Using CelebA Dataset

This notebook demonstrates an easy way to create and optimize constrained problems using the TFCO library. This method can be useful in improving models when we find that they’re not performing equally well across different slices of our data, which we can identify using Fairness Indicators. The second of Google’s AI principles states that our technology should avoid creating or reinforcing unfair bias, and we believe this technique can help improve model fairness in some situations. In particular, this notebook will:

  • Train a simple, unconstrained neural network model to detect a person's smile in images using tf.keras and the large-scale CelebFaces Attributes (CelebA) dataset.
  • Evaluate model performance against a commonly used fairness metric across age groups, using Fairness Indicators.
  • Set up a simple constrained optimization problem to achieve fairer performance across age groups.
  • Retrain the now constrained model and evaluate performance again, ensuring that our chosen fairness metric has improved.

Last updated: 3/11 Feb 2020

Installation

This notebook was created in Colaboratory, connected to the Python 3 Google Compute Engine backend. If you wish to host this notebook in a different environment, then you should not experience any major issues provided you include all the required packages in the cells below.

Note that the very first time you run the pip installs, you may be asked to restart the runtime because some pre-installed packages are out of date. Once you do so, the correct packages will be used.


!pip install git+https://github.com/google-research/tensorflow_constrained_optimization
!pip install -q tensorflow-datasets tensorflow
!pip install fairness-indicators \
  "absl-py==0.8.0" \
  "pyarrow<0.17,>=0.16" \
  "apache-beam<3,>=2.20" \
  "avro-python3==1.9.1" \
  "pyzmq==17.0.0"

Collecting git+https://github.com/google-research/tensorflow_constrained_optimization
  Cloning https://github.com/google-research/tensorflow_constrained_optimization to /tmp/pip-req-build-bk9s0kps
  Running command git clone -q https://github.com/google-research/tensorflow_constrained_optimization /tmp/pip-req-build-bk9s0kps
Requirement already satisfied: numpy in /home/kbuilder/.local/lib/python3.6/site-packages (from tfco-nightly==0.3.dev20200612) (1.18.5)
Requirement already satisfied: scipy in /home/kbuilder/.local/lib/python3.6/site-packages (from tfco-nightly==0.3.dev20200612) (1.4.1)
Requirement already satisfied: six in /home/kbuilder/.local/lib/python3.6/site-packages (from tfco-nightly==0.3.dev20200612) (1.15.0)
Requirement already satisfied: tensorflow>=1.14 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tfco-nightly==0.3.dev20200612) (2.2.0)
Requirement already satisfied: keras-preprocessing>=1.1.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (1.1.2)
Requirement already satisfied: tensorflow-estimator<2.3.0,>=2.2.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (2.2.0)
Requirement already satisfied: opt-einsum>=2.3.2 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (3.2.1)
Requirement already satisfied: wrapt>=1.11.1 in /home/kbuilder/.local/lib/python3.6/site-packages (from tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (1.12.1)
Requirement already satisfied: grpcio>=1.8.6 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (1.29.0)
Requirement already satisfied: astunparse==1.6.3 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (1.6.3)
Requirement already satisfied: google-pasta>=0.1.8 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (0.2.0)
Requirement already satisfied: gast==0.3.3 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (0.3.3)
Requirement already satisfied: absl-py>=0.7.0 in /home/kbuilder/.local/lib/python3.6/site-packages (from tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (0.9.0)
Requirement already satisfied: protobuf>=3.8.0 in /home/kbuilder/.local/lib/python3.6/site-packages (from tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (3.12.2)
Requirement already satisfied: termcolor>=1.1.0 in /home/kbuilder/.local/lib/python3.6/site-packages (from tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (1.1.0)
Requirement already satisfied: wheel>=0.26; python_version >= "3" in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (0.34.2)
Requirement already satisfied: tensorboard<2.3.0,>=2.2.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (2.2.2)
Requirement already satisfied: h5py<2.11.0,>=2.10.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (2.10.0)
Requirement already satisfied: setuptools in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from protobuf>=3.8.0->tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (46.4.0)
Requirement already satisfied: google-auth<2,>=1.6.3 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (1.17.1)
Requirement already satisfied: werkzeug>=0.11.15 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (1.0.1)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (0.4.1)
Requirement already satisfied: markdown>=2.6.8 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (3.2.2)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (1.6.0.post3)
Requirement already satisfied: requests<3,>=2.21.0 in /home/kbuilder/.local/lib/python3.6/site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (2.23.0)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/lib/python3/dist-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (0.2.1)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /home/kbuilder/.local/lib/python3.6/site-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (4.1.0)
Requirement already satisfied: rsa<5,>=3.1.4; python_version >= "3" in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (4.2)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (1.3.0)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /home/kbuilder/.local/lib/python3.6/site-packages (from markdown>=2.6.8->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (1.6.1)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/lib/python3/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (1.22)
Requirement already satisfied: idna<3,>=2.5 in /usr/lib/python3/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (2.6)
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (2018.1.18)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/lib/python3/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (3.0.4)
Requirement already satisfied: pyasn1>=0.1.3 in /usr/lib/python3/dist-packages (from rsa<5,>=3.1.4; python_version >= "3"->google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (0.4.2)
Requirement already satisfied: oauthlib>=3.0.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (3.1.0)
Requirement already satisfied: zipp>=0.5 in /home/kbuilder/.local/lib/python3.6/site-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard<2.3.0,>=2.2.0->tensorflow>=1.14->tfco-nightly==0.3.dev20200612) (3.1.0)
Building wheels for collected packages: tfco-nightly
  Building wheel for tfco-nightly (setup.py) ... done
  Created wheel for tfco-nightly: filename=tfco_nightly-0.3.dev20200612-py3-none-any.whl size=144191 sha256=20308ffc75505360ceb3c57ed2024c001ab3b4748666dabc7079034423cc185d
  Stored in directory: /tmp/pip-ephem-wheel-cache-6h0nx0yp/wheels/6b/76/1e/08bd1d997a17f406d9e56d289668e56ec43e6d7cd7f269b698
Successfully built tfco-nightly
Installing collected packages: tfco-nightly
Successfully installed tfco-nightly-0.3.dev20200612
Collecting fairness-indicators
  Using cached fairness_indicators-0.1.2-py3-none-any.whl (48 kB)
Collecting absl-py==0.8.0
  Downloading absl-py-0.8.0.tar.gz (102 kB)
     |████████████████████████████████| 102 kB 2.8 MB/s
Collecting pyarrow<0.17,>=0.16
  Using cached pyarrow-0.16.0-cp36-cp36m-manylinux2014_x86_64.whl (63.1 MB)
Collecting apache-beam<3,>=2.20
  Using cached apache_beam-2.22.0-cp36-cp36m-manylinux1_x86_64.whl (3.4 MB)
Collecting avro-python3==1.9.1
  Downloading avro-python3-1.9.1.tar.gz (36 kB)
Collecting pyzmq==17.0.0
  Downloading pyzmq-17.0.0-cp36-cp36m-manylinux1_x86_64.whl (3.1 MB)
     |████████████████████████████████| 3.1 MB 8.6 MB/s
Requirement already satisfied: tensorflow<3,>=1.15 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from fairness-indicators) (2.2.0)
Requirement already satisfied: wheel>=0.26; python_version >= "3" in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from fairness-indicators) (0.34.2)
Collecting tensorflow-data-validation<1,>=0.15.0
  Using cached tensorflow_data_validation-0.22.0-cp36-cp36m-manylinux2010_x86_64.whl (2.4 MB)
Collecting witwidget<2,>=1.4.4
  Using cached witwidget-1.6.0-py3-none-any.whl (2.3 MB)
Requirement already satisfied: setuptools>=40.2.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from fairness-indicators) (46.4.0)
Collecting tensorflow-model-analysis<1,>=0.21.0
  Using cached tensorflow_model_analysis-0.22.1-py3-none-any.whl (1.6 MB)
Requirement already satisfied: six in /home/kbuilder/.local/lib/python3.6/site-packages (from absl-py==0.8.0) (1.15.0)
Requirement already satisfied: numpy>=1.14 in /home/kbuilder/.local/lib/python3.6/site-packages (from pyarrow<0.17,>=0.16) (1.18.5)
Collecting oauth2client<4,>=2.0.1
  Downloading oauth2client-3.0.0.tar.gz (77 kB)
     |████████████████████████████████| 77 kB 9.5 MB/s
Requirement already satisfied: pydot<2,>=1.2.0 in /home/kbuilder/.local/lib/python3.6/site-packages (from apache-beam<3,>=2.20) (1.4.1)
Requirement already satisfied: pytz>=2018.3 in /home/kbuilder/.local/lib/python3.6/site-packages (from apache-beam<3,>=2.20) (2020.1)
Collecting typing-extensions<3.8.0,>=3.7.0
  Using cached typing_extensions-3.7.4.2-py3-none-any.whl (22 kB)
Requirement already satisfied: python-dateutil<3,>=2.8.0 in /home/kbuilder/.local/lib/python3.6/site-packages (from apache-beam<3,>=2.20) (2.8.1)
Collecting fastavro<0.24,>=0.21.4
  Using cached fastavro-0.23.4-cp36-cp36m-manylinux2010_x86_64.whl (1.4 MB)
Collecting pymongo<4.0.0,>=3.8.0
  Using cached pymongo-3.10.1-cp36-cp36m-manylinux2014_x86_64.whl (460 kB)
Requirement already satisfied: grpcio<2,>=1.12.1 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from apache-beam<3,>=2.20) (1.29.0)
Requirement already satisfied: dill<0.3.2,>=0.3.1.1 in /home/kbuilder/.local/lib/python3.6/site-packages (from apache-beam<3,>=2.20) (0.3.1.1)
Requirement already satisfied: future<1.0.0,>=0.18.2 in /home/kbuilder/.local/lib/python3.6/site-packages (from apache-beam<3,>=2.20) (0.18.2)
Processing /home/kbuilder/.cache/pip/wheels/3e/0c/c3/26ad975f80274d6bf73ed4d8facd055648f452428bc1623283/hdfs-2.5.8-py3-none-any.whl
Requirement already satisfied: httplib2<0.18.0,>=0.8 in /usr/lib/python3/dist-packages (from apache-beam<3,>=2.20) (0.9.2)
Requirement already satisfied: protobuf<4,>=3.5.0.post1 in /home/kbuilder/.local/lib/python3.6/site-packages (from apache-beam<3,>=2.20) (3.12.2)
Collecting mock<3.0.0,>=1.0.1
  Using cached mock-2.0.0-py2.py3-none-any.whl (56 kB)
Processing /home/kbuilder/.cache/pip/wheels/ac/bb/07/adfb4ffd0aaace2022ea25c082a7cdc688b10d30e86d6d2fde/crcmod-1.7-cp36-cp36m-linux_x86_64.whl
Requirement already satisfied: wrapt>=1.11.1 in /home/kbuilder/.local/lib/python3.6/site-packages (from tensorflow<3,>=1.15->fairness-indicators) (1.12.1)
Requirement already satisfied: astunparse==1.6.3 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorflow<3,>=1.15->fairness-indicators) (1.6.3)
Requirement already satisfied: opt-einsum>=2.3.2 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorflow<3,>=1.15->fairness-indicators) (3.2.1)
Requirement already satisfied: tensorflow-estimator<2.3.0,>=2.2.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorflow<3,>=1.15->fairness-indicators) (2.2.0)
Requirement already satisfied: gast==0.3.3 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorflow<3,>=1.15->fairness-indicators) (0.3.3)
Requirement already satisfied: keras-preprocessing>=1.1.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorflow<3,>=1.15->fairness-indicators) (1.1.2)
Requirement already satisfied: tensorboard<2.3.0,>=2.2.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorflow<3,>=1.15->fairness-indicators) (2.2.2)
Requirement already satisfied: google-pasta>=0.1.8 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorflow<3,>=1.15->fairness-indicators) (0.2.0)
Requirement already satisfied: termcolor>=1.1.0 in /home/kbuilder/.local/lib/python3.6/site-packages (from tensorflow<3,>=1.15->fairness-indicators) (1.1.0)
Requirement already satisfied: h5py<2.11.0,>=2.10.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorflow<3,>=1.15->fairness-indicators) (2.10.0)
Requirement already satisfied: scipy==1.4.1; python_version >= "3" in /home/kbuilder/.local/lib/python3.6/site-packages (from tensorflow<3,>=1.15->fairness-indicators) (1.4.1)
Requirement already satisfied: tensorflow-metadata<0.23,>=0.22 in /home/kbuilder/.local/lib/python3.6/site-packages (from tensorflow-data-validation<1,>=0.15.0->fairness-indicators) (0.22.2)
Collecting joblib<0.15,>=0.12
  Using cached joblib-0.14.1-py2.py3-none-any.whl (294 kB)
Collecting pandas<1,>=0.24
  Downloading pandas-0.25.3-cp36-cp36m-manylinux1_x86_64.whl (10.4 MB)
     |████████████████████████████████| 10.4 MB 26.8 MB/s
Collecting tensorflow-transform<0.23,>=0.22
  Using cached tensorflow_transform-0.22.0-py3-none-any.whl (326 kB)
Collecting tfx-bsl<0.23,>=0.22
  Using cached tfx_bsl-0.22.0-cp36-cp36m-manylinux2010_x86_64.whl (2.0 MB)
Collecting google-api-python-client>=1.7.8
  Using cached google_api_python_client-1.9.3-py3-none-any.whl (59 kB)
Requirement already satisfied: ipywidgets>=7.0.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from witwidget<2,>=1.4.4->fairness-indicators) (7.5.1)
Requirement already satisfied: jupyter<2,>=1.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from witwidget<2,>=1.4.4->fairness-indicators) (1.0.0)
Requirement already satisfied: pyasn1>=0.1.7 in /usr/lib/python3/dist-packages (from oauth2client<4,>=2.0.1->apache-beam<3,>=2.20) (0.4.2)
Requirement already satisfied: pyasn1-modules>=0.0.5 in /usr/lib/python3/dist-packages (from oauth2client<4,>=2.0.1->apache-beam<3,>=2.20) (0.2.1)
Requirement already satisfied: rsa>=3.1.4 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from oauth2client<4,>=2.0.1->apache-beam<3,>=2.20) (4.2)
Requirement already satisfied: pyparsing>=2.1.4 in /home/kbuilder/.local/lib/python3.6/site-packages (from pydot<2,>=1.2.0->apache-beam<3,>=2.20) (2.4.7)
Processing /home/kbuilder/.cache/pip/wheels/3f/2a/fa/4d7a888e69774d5e6e855d190a8a51b357d77cc05eb1c097c9/docopt-0.6.2-py2.py3-none-any.whl
Requirement already satisfied: requests>=2.7.0 in /home/kbuilder/.local/lib/python3.6/site-packages (from hdfs<3.0.0,>=2.1.0->apache-beam<3,>=2.20) (2.23.0)
Collecting pbr>=0.11
  Using cached pbr-5.4.5-py2.py3-none-any.whl (110 kB)
Requirement already satisfied: google-auth<2,>=1.6.3 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow<3,>=1.15->fairness-indicators) (1.17.1)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow<3,>=1.15->fairness-indicators) (0.4.1)
Requirement already satisfied: markdown>=2.6.8 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow<3,>=1.15->fairness-indicators) (3.2.2)
Requirement already satisfied: werkzeug>=0.11.15 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow<3,>=1.15->fairness-indicators) (1.0.1)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow<3,>=1.15->fairness-indicators) (1.6.0.post3)
Requirement already satisfied: googleapis-common-protos in /home/kbuilder/.local/lib/python3.6/site-packages (from tensorflow-metadata<0.23,>=0.22->tensorflow-data-validation<1,>=0.15.0->fairness-indicators) (1.52.0)
Collecting tensorflow-serving-api<3,>=1.15
  Using cached tensorflow_serving_api-2.2.0-py2.py3-none-any.whl (38 kB)
Collecting google-auth-httplib2>=0.0.3
  Using cached google_auth_httplib2-0.0.3-py2.py3-none-any.whl (6.3 kB)
Collecting google-api-core<2dev,>=1.18.0
  Using cached google_api_core-1.20.0-py2.py3-none-any.whl (90 kB)
Collecting uritemplate<4dev,>=3.0.0
  Using cached uritemplate-3.0.1-py2.py3-none-any.whl (15 kB)
Requirement already satisfied: ipython>=4.0.0; python_version >= "3.3" in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from ipywidgets>=7.0.0->witwidget<2,>=1.4.4->fairness-indicators) (7.15.0)
Requirement already satisfied: widgetsnbextension~=3.5.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from ipywidgets>=7.0.0->witwidget<2,>=1.4.4->fairness-indicators) (3.5.1)
Requirement already satisfied: ipykernel>=4.5.1 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from ipywidgets>=7.0.0->witwidget<2,>=1.4.4->fairness-indicators) (5.3.0)
Requirement already satisfied: traitlets>=4.3.1 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from ipywidgets>=7.0.0->witwidget<2,>=1.4.4->fairness-indicators) (4.3.3)
Requirement already satisfied: nbformat>=4.2.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from ipywidgets>=7.0.0->witwidget<2,>=1.4.4->fairness-indicators) (4.4.0)
Requirement already satisfied: nbconvert in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from jupyter<2,>=1.0->witwidget<2,>=1.4.4->fairness-indicators) (5.6.1)
Requirement already satisfied: jupyter-console in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from jupyter<2,>=1.0->witwidget<2,>=1.4.4->fairness-indicators) (6.1.0)
Requirement already satisfied: notebook in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from jupyter<2,>=1.0->witwidget<2,>=1.4.4->fairness-indicators) (6.0.3)
Requirement already satisfied: qtconsole in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from jupyter<2,>=1.0->witwidget<2,>=1.4.4->fairness-indicators) (4.7.4)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/lib/python3/dist-packages (from requests>=2.7.0->hdfs<3.0.0,>=2.1.0->apache-beam<3,>=2.20) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests>=2.7.0->hdfs<3.0.0,>=2.1.0->apache-beam<3,>=2.20) (2018.1.18)
Requirement already satisfied: idna<3,>=2.5 in /usr/lib/python3/dist-packages (from requests>=2.7.0->hdfs<3.0.0,>=2.1.0->apache-beam<3,>=2.20) (2.6)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/lib/python3/dist-packages (from requests>=2.7.0->hdfs<3.0.0,>=2.1.0->apache-beam<3,>=2.20) (1.22)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /home/kbuilder/.local/lib/python3.6/site-packages (from google-auth<2,>=1.6.3->tensorboard<2.3.0,>=2.2.0->tensorflow<3,>=1.15->fairness-indicators) (4.1.0)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow<3,>=1.15->fairness-indicators) (1.3.0)
Requirement already satisfied: importlib-metadata; python_version < "3.8" in /home/kbuilder/.local/lib/python3.6/site-packages (from markdown>=2.6.8->tensorboard<2.3.0,>=2.2.0->tensorflow<3,>=1.15->fairness-indicators) (1.6.1)
Requirement already satisfied: decorator in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.0.0->witwidget<2,>=1.4.4->fairness-indicators) (4.4.2)
Requirement already satisfied: pygments in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.0.0->witwidget<2,>=1.4.4->fairness-indicators) (2.6.1)
Requirement already satisfied: jedi>=0.10 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.0.0->witwidget<2,>=1.4.4->fairness-indicators) (0.17.0)
Requirement already satisfied: prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.0.0->witwidget<2,>=1.4.4->fairness-indicators) (3.0.5)
Requirement already satisfied: pickleshare in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.0.0->witwidget<2,>=1.4.4->fairness-indicators) (0.7.5)
Requirement already satisfied: pexpect; sys_platform != "win32" in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.0.0->witwidget<2,>=1.4.4->fairness-indicators) (4.8.0)
Requirement already satisfied: backcall in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.0.0->witwidget<2,>=1.4.4->fairness-indicators) (0.2.0)
Requirement already satisfied: tornado>=4.2 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from ipykernel>=4.5.1->ipywidgets>=7.0.0->witwidget<2,>=1.4.4->fairness-indicators) (6.0.4)
Requirement already satisfied: jupyter-client in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from ipykernel>=4.5.1->ipywidgets>=7.0.0->witwidget<2,>=1.4.4->fairness-indicators) (6.1.3)
Requirement already satisfied: ipython-genutils in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from traitlets>=4.3.1->ipywidgets>=7.0.0->witwidget<2,>=1.4.4->fairness-indicators) (0.2.0)
Requirement already satisfied: jsonschema!=2.5.0,>=2.4 in /usr/lib/python3/dist-packages (from nbformat>=4.2.0->ipywidgets>=7.0.0->witwidget<2,>=1.4.4->fairness-indicators) (2.6.0)
Requirement already satisfied: jupyter-core in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from nbformat>=4.2.0->ipywidgets>=7.0.0->witwidget<2,>=1.4.4->fairness-indicators) (4.6.3)
Requirement already satisfied: entrypoints>=0.2.2 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from nbconvert->jupyter<2,>=1.0->witwidget<2,>=1.4.4->fairness-indicators) (0.3)
Requirement already satisfied: defusedxml in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from nbconvert->jupyter<2,>=1.0->witwidget<2,>=1.4.4->fairness-indicators) (0.6.0)
Requirement already satisfied: mistune<2,>=0.8.1 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from nbconvert->jupyter<2,>=1.0->witwidget<2,>=1.4.4->fairness-indicators) (0.8.4)
Requirement already satisfied: jinja2>=2.4 in /usr/lib/python3/dist-packages (from nbconvert->jupyter<2,>=1.0->witwidget<2,>=1.4.4->fairness-indicators) (2.10)
Requirement already satisfied: bleach in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from nbconvert->jupyter<2,>=1.0->witwidget<2,>=1.4.4->fairness-indicators) (3.1.5)
Requirement already satisfied: testpath in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from nbconvert->jupyter<2,>=1.0->witwidget<2,>=1.4.4->fairness-indicators) (0.4.4)
Requirement already satisfied: pandocfilters>=1.4.1 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from nbconvert->jupyter<2,>=1.0->witwidget<2,>=1.4.4->fairness-indicators) (1.4.2)
Requirement already satisfied: Send2Trash in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from notebook->jupyter<2,>=1.0->witwidget<2,>=1.4.4->fairness-indicators) (1.5.0)
Requirement already satisfied: terminado>=0.8.1 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from notebook->jupyter<2,>=1.0->witwidget<2,>=1.4.4->fairness-indicators) (0.8.3)
Requirement already satisfied: prometheus-client in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from notebook->jupyter<2,>=1.0->witwidget<2,>=1.4.4->fairness-indicators) (0.8.0)
Requirement already satisfied: qtpy in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from qtconsole->jupyter<2,>=1.0->witwidget<2,>=1.4.4->fairness-indicators) (1.9.0)
Requirement already satisfied: oauthlib>=3.0.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.3.0,>=2.2.0->tensorflow<3,>=1.15->fairness-indicators) (3.1.0)
Requirement already satisfied: zipp>=0.5 in /home/kbuilder/.local/lib/python3.6/site-packages (from importlib-metadata; python_version < "3.8"->markdown>=2.6.8->tensorboard<2.3.0,>=2.2.0->tensorflow<3,>=1.15->fairness-indicators) (3.1.0)
Requirement already satisfied: parso>=0.7.0 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from jedi>=0.10->ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.0.0->witwidget<2,>=1.4.4->fairness-indicators) (0.7.0)
Requirement already satisfied: wcwidth in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0->ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.0.0->witwidget<2,>=1.4.4->fairness-indicators) (0.2.4)
Requirement already satisfied: ptyprocess>=0.5 in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from pexpect; sys_platform != "win32"->ipython>=4.0.0; python_version >= "3.3"->ipywidgets>=7.0.0->witwidget<2,>=1.4.4->fairness-indicators) (0.6.0)
Requirement already satisfied: packaging in /home/kbuilder/.local/lib/python3.6/site-packages (from bleach->nbconvert->jupyter<2,>=1.0->witwidget<2,>=1.4.4->fairness-indicators) (20.4)
Requirement already satisfied: webencodings in /tmpfs/src/tf_docs_env/lib/python3.6/site-packages (from bleach->nbconvert->jupyter<2,>=1.0->witwidget<2,>=1.4.4->fairness-indicators) (0.5.1)
Building wheels for collected packages: absl-py, avro-python3, oauth2client
  Building wheel for absl-py (setup.py) ... done
  Created wheel for absl-py: filename=absl_py-0.8.0-py3-none-any.whl size=120986 sha256=cde63b66e69edd0cf6590e96433168e9b3dd240c1d3a3108d32dfdbffad63e56
  Stored in directory: /home/kbuilder/.cache/pip/wheels/32/47/ca/883a37d072420003605e16ae92810e7305b06aa93b13111381
  Building wheel for avro-python3 (setup.py) ... done
  Created wheel for avro-python3: filename=avro_python3-1.9.1-py3-none-any.whl size=43202 sha256=ef741fb816b66cc7d5d5cb3e271db41d6e9d32029c4dc917def2d4b6f830d2ae
  Stored in directory: /home/kbuilder/.cache/pip/wheels/c9/d9/de/bff6c77bcc38ff270f812917ec5de9ff8ec943bbb7e3b9100e
  Building wheel for oauth2client (setup.py) ... done
  Created wheel for oauth2client: filename=oauth2client-3.0.0-py3-none-any.whl size=106383 sha256=e1543f77ce888acca00204d741064ab09bd56aa521abe229d20c817cee4a0f64
  Stored in directory: /home/kbuilder/.cache/pip/wheels/85/84/41/0db9b5f02fab88d266e64a52c5a468a3a70f6d331e75ec0e49
Successfully built absl-py avro-python3 oauth2client
ERROR: qtconsole 4.7.4 has requirement pyzmq>=17.1, but you'll have pyzmq 17.0.0 which is incompatible.
ERROR: witwidget 1.6.0 has requirement oauth2client>=4.1.3, but you'll have oauth2client 3.0.0 which is incompatible.
Installing collected packages: absl-py, joblib, oauth2client, typing-extensions, avro-python3, fastavro, pymongo, pyarrow, docopt, hdfs, pbr, mock, crcmod, apache-beam, pandas, google-auth-httplib2, google-api-core, uritemplate, google-api-python-client, tensorflow-serving-api, tfx-bsl, tensorflow-transform, tensorflow-data-validation, witwidget, tensorflow-model-analysis, fairness-indicators, pyzmq
  Attempting uninstall: absl-py
    Found existing installation: absl-py 0.9.0
    Not uninstalling absl-py at /home/kbuilder/.local/lib/python3.6/site-packages, outside environment /tmpfs/src/tf_docs_env
    Can't uninstall 'absl-py'. No files were found to uninstall.
  Attempting uninstall: joblib
    Found existing installation: joblib 0.15.1
    Not uninstalling joblib at /home/kbuilder/.local/lib/python3.6/site-packages, outside environment /tmpfs/src/tf_docs_env
    Can't uninstall 'joblib'. No files were found to uninstall.
  Attempting uninstall: pandas
    Found existing installation: pandas 1.0.4
    Not uninstalling pandas at /home/kbuilder/.local/lib/python3.6/site-packages, outside environment /tmpfs/src/tf_docs_env
    Can't uninstall 'pandas'. No files were found to uninstall.
  Attempting uninstall: pyzmq
    Found existing installation: pyzmq 19.0.1
    Uninstalling pyzmq-19.0.1:
      Successfully uninstalled pyzmq-19.0.1
Successfully installed absl-py-0.8.0 apache-beam-2.22.0 avro-python3-1.9.1 crcmod-1.7 docopt-0.6.2 fairness-indicators-0.1.2 fastavro-0.23.4 google-api-core-1.20.0 google-api-python-client-1.9.3 google-auth-httplib2-0.0.3 hdfs-2.5.8 joblib-0.14.1 mock-2.0.0 oauth2client-3.0.0 pandas-0.25.3 pbr-5.4.5 pyarrow-0.16.0 pymongo-3.10.1 pyzmq-17.0.0 tensorflow-data-validation-0.22.0 tensorflow-model-analysis-0.22.1 tensorflow-serving-api-2.2.0 tensorflow-transform-0.22.0 tfx-bsl-0.22.0 typing-extensions-3.7.4.2 uritemplate-3.0.1 witwidget-1.6.0
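
If you want to confirm that the pinned versions took effect after restarting the runtime, a quick check such as the one below can help. This cell is not part of the original notebook; it is a minimal sketch, and the package names simply mirror the pins above.

import pkg_resources

for pkg in ["pyarrow", "apache-beam", "pandas", "pyzmq"]:
  # Print the installed version of each pinned package.
  print(pkg, pkg_resources.get_distribution(pkg).version)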

Note that depending on when you run the cell below, you may receive a warning about the default version of TensorFlow in Colab switching to TensorFlow 2.X soon. You can safely ignore that warning as this notebook was designed to be compatible with TensorFlow 1.X and 2.X.


import os
import sys
import tempfile
import urllib

import tensorflow as tf
from tensorflow import keras

import tensorflow_datasets as tfds
tfds.disable_progress_bar()

import numpy as np

import tensorflow_constrained_optimization as tfco

Additionally, we add a few imports that are specific to Fairness Indicators, which we will use to evaluate and visualize the model's performance.


import tensorflow_model_analysis as tfma
import fairness_indicators as fi
from google.protobuf import text_format
import apache_beam as beam

Although TFCO is compatible with both eager and graph execution, this notebook assumes that eager execution is enabled, as it is by default in TensorFlow 2.x. To ensure that nothing breaks, the cell below explicitly enables eager execution when running under TensorFlow 1.x.


if tf.__version__ < "2.0.0":
  tf.compat.v1.enable_eager_execution()
  print("Eager execution enabled.")
else:
  print("Eager execution enabled by default.")

print("TensorFlow " + tf.__version__)
print("TFMA " + tfma.VERSION_STRING)
print("TFDS " + tfds.version.__version__)
print("FI " + fi.version.__version__)
Eager execution enabled by default.
TensorFlow 2.2.0
TFMA 0.22.1
TFDS 3.1.0
FI 0.1.2

CelebA Dataset

CelebA is a large-scale face attributes dataset with more than 200,000 celebrity images, each with 40 attribute annotations (such as hair type, fashion accessories, facial features, etc.) and 5 landmark locations (eyes, mouth and nose positions). For more details take a look at the paper. With the permission of the owners, we have stored this dataset on Google Cloud Storage and mostly access it via TensorFlow Datasets (tfds).

In this notebook:

  • Our model will attempt to classify whether the subject of the image is smiling, as represented by the "Smiling" attribute*.
  • Images will be resized from 218x178 to 28x28 to reduce the execution time and memory when training.
  • Our model's performance will be evaluated across age groups, using the binary "Young" attribute. We will call this "age group" in this notebook.

* While there is little information available about the labeling methodology for this dataset, we will assume that the "Smiling" attribute was determined by a pleased, kind, or amused expression on the subject's face. For the purpose of this case study, we will take these labels as ground truth.

gcs_base_dir = "gs://celeb_a_dataset/"
celeb_a_builder = tfds.builder("celeb_a", data_dir=gcs_base_dir, version='2.0.0')

celeb_a_builder.download_and_prepare()

num_test_shards_dict = {'0.3.0': 4, '2.0.0': 2} # Used because we download the test dataset separately
version = str(celeb_a_builder.info.version)
print('Celeb_A dataset version: %s' % version)
Celeb_A dataset version: 2.0.0
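
As a quick sanity check (not part of the original notebook), we can peek at one raw example to confirm that the "Smiling" and "Young" attributes are present as boolean annotations:

# Minimal sketch: inspect the attributes of a single raw test example.
raw_example = next(iter(celeb_a_builder.as_dataset(split='test').take(1)))
print('Smiling:', raw_example['attributes']['Smiling'].numpy())
print('Young:  ', raw_example['attributes']['Young'].numpy())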


local_root = tempfile.mkdtemp(prefix='test-data')
def local_test_filename_base():
  return local_root

def local_test_file_full_prefix():
  return os.path.join(local_test_filename_base(), "celeb_a-test.tfrecord")

def copy_test_files_to_local():
  filename_base = local_test_file_full_prefix()
  num_test_shards = num_test_shards_dict[version]
  for shard in range(num_test_shards):
    url = "https://storage.googleapis.com/celeb_a_dataset/celeb_a/%s/celeb_a-test.tfrecord-0000%s-of-0000%s" % (version, shard, num_test_shards)
    filename = "%s-0000%s-of-0000%s" % (filename_base, shard, num_test_shards)
    res = urllib.request.urlretrieve(url, filename)

Caveats

Before moving forward, there are several considerations to keep in mind in using CelebA:

  • Although in principle this notebook could use any dataset of face images, CelebA was chosen because it contains public domain images of public figures.
  • All of the attribute annotations in CelebA are operationalized as binary categories. For example, the "Young" attribute (as determined by the dataset labelers) is denoted as either present or absent in the image.
  • CelebA's categorizations do not reflect real human diversity of attributes.
  • For the purposes of this notebook, the feature containing the "Young" attribute is referred to as "age group", where the presence of the "Young" attribute in an image is labeled as a member of the "Young" age group and the absence of the "Young" attribute is labeled as a member of the "Not Young" age group. These are assumptions made as this information is not mentioned in the original paper.
  • As such, performance in the models trained in this notebook is tied to the ways the attributes have been operationalized and annotated by the authors of CelebA.
  • This model should not be used for commercial purposes as that would violate CelebA's non-commercial research agreement.

Setting Up Input Functions

The subsequent cells will help streamline the input pipeline as well as visualize performance.

First we define some data-related variables and define a requisite preprocessing function.


ATTR_KEY = "attributes"
IMAGE_KEY = "image"
LABEL_KEY = "Smiling"
GROUP_KEY = "Young"
IMAGE_SIZE = 28

def preprocess_input_dict(feat_dict):
  # Separate out the image and target variable from the feature dictionary.
  image = feat_dict[IMAGE_KEY]
  label = feat_dict[ATTR_KEY][LABEL_KEY]
  group = feat_dict[ATTR_KEY][GROUP_KEY]

  # Resize and normalize image.
  image = tf.cast(image, tf.float32)
  image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE])
  image /= 255.0

  # Cast label and group to float32.
  label = tf.cast(label, tf.float32)
  group = tf.cast(group, tf.float32)

  feat_dict[IMAGE_KEY] = image
  feat_dict[ATTR_KEY][LABEL_KEY] = label
  feat_dict[ATTR_KEY][GROUP_KEY] = group

  return feat_dict

get_image_and_label = lambda feat_dict: (feat_dict[IMAGE_KEY], feat_dict[ATTR_KEY][LABEL_KEY])
get_image_label_and_group = lambda feat_dict: (feat_dict[IMAGE_KEY], feat_dict[ATTR_KEY][LABEL_KEY], feat_dict[ATTR_KEY][GROUP_KEY])

Then, we build out the data functions we need in the rest of the colab.

# Train data returning either 2 or 3 elements (the third element being the group)
def celeb_a_train_data_wo_group(batch_size):
  celeb_a_train_data = celeb_a_builder.as_dataset(split='train').shuffle(1024).repeat().batch(batch_size).map(preprocess_input_dict)
  return celeb_a_train_data.map(get_image_and_label)
def celeb_a_train_data_w_group(batch_size):
  celeb_a_train_data = celeb_a_builder.as_dataset(split='train').shuffle(1024).repeat().batch(batch_size).map(preprocess_input_dict)
  return celeb_a_train_data.map(get_image_label_and_group)

# Test data for the overall evaluation
celeb_a_test_data = celeb_a_builder.as_dataset(split='test').batch(1).map(preprocess_input_dict).map(get_image_label_and_group)
# Copy test data locally to be able to read it into tfma
copy_test_files_to_local()

Build a simple DNN Model

Because this notebook focuses on TFCO, we will assemble a simple, unconstrained tf.keras.Sequential model.

We may be able to greatly improve model performance by adding some complexity (e.g., more densely-connected layers, exploring different activation functions, increasing image size), but that may distract from the goal of demonstrating how easy it is to apply the TFCO library when working with Keras. For that reason, the model will be kept simple — but feel encouraged to explore this space.

def create_model():
  # For this notebook, accuracy will be used to evaluate performance.
  METRICS = [
    tf.keras.metrics.BinaryAccuracy(name='accuracy')
  ]

  # The model consists of:
  # 1. An input layer that represents the flattened 28x28x3 image.
  # 2. A fully connected layer with 64 units activated by a ReLU function.
  # 3. A single-unit readout layer to output real scores instead of probabilities.
  model = keras.Sequential([
      keras.layers.Flatten(input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3), name='image'),
      keras.layers.Dense(64, activation='relu'),
      keras.layers.Dense(1, activation=None)
  ])

  # TFCO uses hinge loss by default, so that is also the loss used for this model.
  model.compile(
      optimizer=tf.keras.optimizers.Adam(0.001),
      loss='hinge',
      metrics=METRICS)
  return model

We also define a function to set seeds to ensure reproducible results. Note that this colab is meant as an educational tool and does not have the stability of a finely tuned production pipeline. Running without setting a seed may lead to varied results.

def set_seeds():
  np.random.seed(121212)
  tf.compat.v1.set_random_seed(212121)

Fairness Indicators Helper Functions

Before training our model, we define a number of helper functions that will allow us to evaluate the model's performance via Fairness Indicators.

First, we create a helper function to save our model once we train it.

def save_model(model, subdir):
  base_dir = tempfile.mkdtemp(prefix='saved_models')
  model_location = os.path.join(base_dir, subdir)
  model.save(model_location, save_format='tf')
  return model_location

Next, we define functions used to preprocess the data in order to correctly pass it through to TFMA.


def tfds_filepattern_for_split(dataset_name, split):
  return f"{local_test_file_full_prefix()}*"

class PreprocessCelebA(object):
  """Class that deserializes, decodes and applies additional preprocessing for CelebA input."""
  def __init__(self, dataset_name):
    builder = tfds.builder(dataset_name)
    self.features = builder.info.features
    example_specs = self.features.get_serialized_info()
    self.parser = tfds.core.example_parser.ExampleParser(example_specs)

  def __call__(self, serialized_example):
    # Deserialize
    deserialized_example = self.parser.parse_example(serialized_example)
    # Decode
    decoded_example = self.features.decode_example(deserialized_example)
    # Additional preprocessing
    image = decoded_example[IMAGE_KEY]
    label = decoded_example[ATTR_KEY][LABEL_KEY]
    # Resize and scale image.
    image = tf.cast(image, tf.float32)
    image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE])
    image /= 255.0
    image = tf.reshape(image, [-1])
    # Cast label and group to float32.
    label = tf.cast(label, tf.float32)

    group = decoded_example[ATTR_KEY][GROUP_KEY]
    
    output = tf.train.Example()
    output.features.feature[IMAGE_KEY].float_list.value.extend(image.numpy().tolist())
    output.features.feature[LABEL_KEY].float_list.value.append(label.numpy())
    output.features.feature[GROUP_KEY].bytes_list.value.append(b"Young" if group.numpy() else b'Not Young')
    return output.SerializeToString()

def tfds_as_pcollection(beam_pipeline, dataset_name, split):
  return (
      beam_pipeline
   | 'Read records' >> beam.io.ReadFromTFRecord(tfds_filepattern_for_split(dataset_name, split))
   | 'Preprocess' >> beam.Map(PreprocessCelebA(dataset_name))
  )

Finally, we define a function that evaluates the results in TFMA.

def get_eval_results(model_location, eval_subdir):
  base_dir = tempfile.mkdtemp(prefix='saved_eval_results')
  tfma_eval_result_path = os.path.join(base_dir, eval_subdir)

  eval_config_pbtxt = """
        model_specs {
          label_key: "%s"
        }
        metrics_specs {
          metrics {
            class_name: "FairnessIndicators"
            config: '{ "thresholds": [0.22, 0.5, 0.75] }'
          }
          metrics {
            class_name: "ExampleCount"
          }
        }
        slicing_specs {}
        slicing_specs { feature_keys: "%s" }
        options {
          compute_confidence_intervals { value: False }
          disabled_outputs{values: "analysis"}
        }
      """ % (LABEL_KEY, GROUP_KEY)
      
  eval_config = text_format.Parse(eval_config_pbtxt, tfma.EvalConfig())

  eval_shared_model = tfma.default_eval_shared_model(
        eval_saved_model_path=model_location, tags=[tf.saved_model.SERVING])


  # Run the fairness evaluation.
  with beam.Pipeline() as pipeline:
    _ = (
          tfds_as_pcollection(pipeline, 'celeb_a', 'test')
          | 'ExtractEvaluateAndWriteResults' >>
          tfma.ExtractEvaluateAndWriteResults(
              eval_config=eval_config,
              eval_shared_model=eval_shared_model,
              output_path=tfma_eval_result_path)
    )
  return tfma.load_eval_result(output_path=tfma_eval_result_path)

Train & Evaluate Unconstrained Model

With the model defined and the input pipeline in place, we are now ready to train the model. To keep execution time and memory usage manageable, we will train the model on small batches of data for only a few epochs.

Note that running this notebook in TensorFlow < 2.0.0 may produce a deprecation warning for np.where. You can safely ignore this warning, as TensorFlow addresses it in 2.x by using tf.where in place of np.where.

BATCH_SIZE = 32

# Set seeds to get reproducible results
set_seeds()

model_unconstrained = create_model()
model_unconstrained.fit(celeb_a_train_data_wo_group(BATCH_SIZE), epochs=5, steps_per_epoch=1000)
Epoch 1/5
1000/1000 [==============================] - 8s 8ms/step - loss: 0.5038 - accuracy: 0.7733
Epoch 2/5
1000/1000 [==============================] - 8s 8ms/step - loss: 0.3800 - accuracy: 0.8301
Epoch 3/5
1000/1000 [==============================] - 8s 8ms/step - loss: 0.3598 - accuracy: 0.8427
Epoch 4/5
1000/1000 [==============================] - 11s 11ms/step - loss: 0.3435 - accuracy: 0.8474
Epoch 5/5
1000/1000 [==============================] - 8s 8ms/step - loss: 0.3402 - accuracy: 0.8479

<tensorflow.python.keras.callbacks.History at 0x7f76b0390198>

Evaluating the model on the test data should result in a final accuracy score of just over 85%. Not bad for a simple model with no fine tuning.

print('Overall Results, Unconstrained')
celeb_a_test_data = celeb_a_builder.as_dataset(split='test').batch(1).map(preprocess_input_dict).map(get_image_label_and_group)
results = model_unconstrained.evaluate(celeb_a_test_data)
Overall Results, Unconstrained
19962/19962 [==============================] - 43s 2ms/step - loss: 0.2125 - accuracy: 0.8636

However, performance evaluated across age groups may reveal some shortcomings.

To explore this further, we evaluate the model with Fairness Indicators (via TFMA). In particular, we are interested in seeing whether there is a significant gap in performance between "Young" and "Not Young" categories when evaluated on false positive rate.

A false positive error occurs when the model incorrectly predicts the positive class. In this context, a false positive outcome occurs when the ground truth is an image of a celebrity 'Not Smiling' and the model predicts 'Smiling'. By extension, the false positive rate, which is used in the visualization below, is the fraction of truly 'Not Smiling' images that the model incorrectly labels as 'Smiling'. While this is a relatively mundane error to make in this context, false positive errors can sometimes cause more problematic behaviors. For instance, a false positive error in a spam classifier could cause a user to miss an important email.
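
To make the metric concrete, the sketch below (not part of the original notebook) computes the false positive rate per age group by hand on a small sample of the test set, thresholding the model's raw score at 0.5 to match one of the thresholds used in the Fairness Indicators configuration. Fairness Indicators computes this, and much more, for us; this is only an illustration.

# Minimal sketch: hand-computed false positive rate per age group on a sample.
import collections

false_positives = collections.Counter()
negatives = collections.Counter()
for image, label, group in celeb_a_test_data.take(1000):
  score = tf.squeeze(model_unconstrained(image, training=False))
  group_name = 'Young' if float(group) > 0 else 'Not Young'
  if float(label) == 0.0:            # ground truth: Not Smiling
    negatives[group_name] += 1
    if float(score) > 0.5:           # predicted: Smiling
      false_positives[group_name] += 1

for group_name in negatives:
  print(group_name, 'FPR:', false_positives[group_name] / max(negatives[group_name], 1))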

model_location = save_model(model_unconstrained, 'model_export_unconstrained')
eval_results_unconstrained = get_eval_results(model_location, 'eval_results_unconstrained')
WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py:1817: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.

INFO:tensorflow:Assets written to: /tmp/saved_modelsmnw_zzdv/model_export_unconstrained/assets
WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features.

WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.

WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.6/site-packages/tensorflow_model_analysis/writers/metrics_and_plots_serialization.py:122: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and:
`tf.data.TFRecordDataset(path)`

As mentioned above, we are concentrating on the false positive rate. The current version of Fairness Indicators (0.1.2) selects false negative rate by default. After running the line below, deselect false_negative_rate and select false_positive_rate to look at the metric we are interested in.

tfma.addons.fairness.view.widget_view.render_fairness_indicator(eval_results_unconstrained)
FairnessIndicatorViewer(slicingMetrics=[{'sliceValue': 'Overall', 'slice': 'Overall', 'metrics': {'fairness_in…

As the results above show, we do see a disproportionate gap between the "Young" and "Not Young" categories.

This is where TFCO can help, by constraining the false positive rate to satisfy a more acceptable criterion.

Constrained Model Set Up

As documented in TFCO's library, there are several helpers that will make it easier to constrain the problem:

  1. tfco.rate_context() – This is what will be used in constructing a constraint for each age group category.
  2. tfco.RateMinimizationProblem() – This sets up the optimization problem: minimize the overall error rate subject to a rate constraint on an age group. In other words, instead of optimizing for accuracy alone, training must also keep the false positive rate for the "Not Young" age group within bounds. For this demonstration, a false positive rate of less than or equal to 5% for that group will be set as the constraint.
  3. tfco.ProxyLagrangianOptimizerV2() – This is the helper that will actually solve the rate constraint problem.

The cell below will call on these helpers to set up model training with the fairness constraint.

# The batch size is needed to create the input, labels and group tensors.
# These tensors are initialized with all 0's; the contents of each batch will
# eventually be assigned to them. The batch size is chosen large enough that
# each batch contains both "Young" and "Not Young" examples.
set_seeds()
model_constrained = create_model()
BATCH_SIZE = 32

# Create input tensor.
input_tensor = tf.Variable(
    np.zeros((BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, 3), dtype="float32"),
    name="input")

# Create labels and group tensors (assuming both labels and groups are binary).
labels_tensor = tf.Variable(
    np.zeros(BATCH_SIZE, dtype="float32"), name="labels")
groups_tensor = tf.Variable(
    np.zeros(BATCH_SIZE, dtype="float32"), name="groups")

# Create a function that applies the constrained 'model' to the input tensor
# and returns its predictions.
def predictions():
  return model_constrained(input_tensor)

# Create overall context and subsetted context.
# The subsetted context contains subset of examples where group attribute < 1
# (i.e. the subset of "Not Young" celebrity images).
# "groups_tensor < 1" is used instead of "groups_tensor == 0" as the former
# would be a comparison on the tensor value, while the latter would be a
# comparison on the Tensor object.
context = tfco.rate_context(predictions, labels=lambda:labels_tensor)
context_subset = context.subset(lambda:groups_tensor < 1)

# Set up the list of constraints.
# In this notebook, the only constraint is: FPR less than or equal to 5%.
constraints = [tfco.false_positive_rate(context_subset) <= 0.05]

# Setup rate minimization problem: minimize overall error rate s.t. constraints.
problem = tfco.RateMinimizationProblem(tfco.error_rate(context), constraints)

# Create constrained optimizer and obtain train_op.
# Separate optimizers are specified for the objective and constraints
optimizer = tfco.ProxyLagrangianOptimizerV2(
      optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
      constraint_optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
      num_constraints=problem.num_constraints)

# A list of all trainable variables is also needed to use TFCO.
var_list = (model_constrained.trainable_weights + problem.trainable_variables +
            optimizer.trainable_variables())

The model is now set up and ready to be trained with the false positive rate constraint across age groups.

Now, because the last iteration of the constrained model is not necessarily the best-performing model with respect to the defined constraint, the TFCO library comes equipped with tfco.find_best_candidate_index(), which helps choose the best iterate out of the snapshots recorded during training. Think of tfco.find_best_candidate_index() as an added heuristic that ranks each of the recorded snapshots based on accuracy and the fairness constraint (in this case, false positive rate across age group) separately with respect to the training data. That way, it can search for a better trade-off between overall accuracy and the fairness constraint.
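
To build intuition for what tfco.find_best_candidate_index() does, here is a tiny toy illustration (not part of the original notebook) with three hypothetical snapshots, each with one recorded objective value and one constraint violation; the call mirrors the usage in the training loop below.

# Minimal sketch: rank three hypothetical snapshots by objective and violation.
toy_objectives = np.array([0.60, 0.50, 0.40])         # lower is better
toy_violations = np.array([[0.00], [0.02], [0.10]])   # <= 0 means the constraint is satisfied
print('Best snapshot index:', tfco.find_best_candidate_index(toy_objectives, toy_violations))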

The following cells will start the training with constraints while also finding the best performing model per iteration.

# Obtain train set batches.

NUM_ITERATIONS = 100  # Number of training iterations.
SKIP_ITERATIONS = 10  # Print training stats once in this many iterations.

# Create temp directory for saving snapshots of models.
temp_directory = tempfile.mktemp()
os.mkdir(temp_directory)

# List of objective and constraints across iterations.
objective_list = []
violations_list = []

# Training iterations.
iteration_count = 0
for (image, label, group) in celeb_a_train_data_w_group(BATCH_SIZE):
  # Assign current batch to input, labels and groups tensors.
  input_tensor.assign(image)
  labels_tensor.assign(label)
  groups_tensor.assign(group)

  # Run gradient update.
  optimizer.minimize(problem, var_list=var_list)

  # Record objective and violations.
  objective = problem.objective()
  violations = problem.constraints()

  sys.stdout.write(
      "\r Iteration %d: Hinge Loss = %.3f, Max. Constraint Violation = %.3f"
      % (iteration_count + 1, objective, max(violations)))

  # Snapshot model once in SKIP_ITERATIONS iterations.
  if iteration_count % SKIP_ITERATIONS == 0:
    objective_list.append(objective)
    violations_list.append(violations)

    # Save snapshot of model weights.
    model_constrained.save_weights(
        temp_directory + "/celeb_a_constrained_" +
        str(iteration_count / SKIP_ITERATIONS) + ".h5")

  iteration_count += 1
  if iteration_count >= NUM_ITERATIONS:
    break

# Choose best model from recorded iterates and load that model.
best_index = tfco.find_best_candidate_index(
    np.array(objective_list), np.array(violations_list))

model_constrained.load_weights(
    temp_directory + "/celeb_a_constrained_" + str(best_index) + ".0.h5")

# Remove temp directory.
os.system("rm -r " + temp_directory)
 Iteration 100: Hinge Loss = 0.614, Max. Constraint Violation = 0.268
0

After having applied the constraint, we evaluate the results once again using Fairness Indicators.

model_location = save_model(model_constrained, 'model_export_constrained')
eval_result_constrained = get_eval_results(model_location, 'eval_results_constrained')
INFO:tensorflow:Assets written to: /tmp/saved_models7u5s2mne/model_export_constrained/assets

As with the previous time we used Fairness Indicators, deselect false_negative_rate and select false_positive_rate to look at the metric we are interested in.

Note that to fairly compare the two versions of our model, it is important to use thresholds that set the overall false positive rate to be roughly equal. This ensures that we are looking at an actual change in the model, as opposed to a shift that is equivalent to simply moving the decision threshold. In our case, comparing the unconstrained model at 0.5 and the constrained model at 0.22 provides a fair comparison for the models.
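
If you prefer to inspect the numbers programmatically rather than in the widget, a rough sketch like the following (not part of the original notebook) walks the loaded results and prints any metric whose name contains "false_positive_rate" for the Overall slice. The exact nesting of slicing_metrics can vary across TFMA versions, so treat this as illustrative only.

# Minimal sketch: print overall false-positive-rate metrics at each threshold.
for slice_key, metrics in eval_results_unconstrained.slicing_metrics:
  if not slice_key:  # the empty slice key corresponds to the "Overall" slice
    for output_metrics in metrics.values():
      for sub_key_metrics in output_metrics.values():
        for metric_name, value in sub_key_metrics.items():
          if 'false_positive_rate' in metric_name:
            print(metric_name, value)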

eval_results_dict = {
    'constrained': eval_result_constrained,
    'unconstrained': eval_results_unconstrained,
}
tfma.addons.fairness.view.widget_view.render_fairness_indicator(multi_eval_results=eval_results_dict)
FairnessIndicatorViewer(evalName='constrained', evalNameCompare='unconstrained', slicingMetrics=[{'sliceValue'…

With TFCO's ability to express a more complex requirement as a rate constraint, we helped this model achieve a more desirable outcome with little impact on overall performance. There is, of course, still room for improvement, but at least TFCO was able to find a model that gets close to satisfying the constraint and reduces the disparity between the groups as much as possible.