
Merge pull request #170 from dataquestio/darin-solutions-120722

Update Mission149Solutions.ipynb
darinbradley 2 years ago
parent commit a406abab14

+ 6 - 6
Mission149Solutions.ipynb

@@ -59,7 +59,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Comparing across all degree categories"
+    "## Comparing Across All Degree Categories"
    ]
   },
   {
@@ -158,7 +158,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Hiding x-axis labels"
+    "## Hiding X-Axis Labels"
    ]
   },
   {
@@ -248,7 +248,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Setting y-axis labels"
+    "## Setting Y-Axis Labels"
    ]
   },
   {
@@ -341,7 +341,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Adding a horizontal line"
+    "## Adding a Horizontal Line"
    ]
   },
   {
@@ -437,7 +437,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Exporting to a file"
+    "## Exporting to a File"
    ]
   },
   {
@@ -549,7 +549,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.6.1"
+   "version": "3.8.5"
   }
  },
  "nbformat": 4,
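Reviewer note: the Mission149 headings retitled above (hiding x-axis labels, setting y-axis labels, adding a horizontal line, exporting to a file) describe a standard matplotlib workflow. A minimal runnable sketch of those four steps — the data and output filename here are invented for illustration, not taken from the notebook:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([2000, 2005, 2010], [55, 57, 58])

# Hiding x-axis labels: suppress both tick marks and tick labels
ax.tick_params(axis="x", bottom=False, labelbottom=False)

# Setting y-axis labels: show only the ticks we care about
ax.set_yticks([50, 55, 60])

# Adding a horizontal line: a faint reference line at 50
ax.axhline(50, c="grey", alpha=0.5)

# Exporting to a file (hypothetical filename)
fig.savefig("gender_degrees.png")
```

Each call operates on the `Axes` object, which keeps the figure-level export (`fig.savefig`) separate from the per-axes styling.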

+ 15 - 15
Mission155Solutions.ipynb

@@ -4,7 +4,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Introduction To The Data Set"
+    "## Introduction To The Dataset"
    ]
   },
   {
@@ -762,7 +762,7 @@
     }
    ],
    "source": [
-    "# Confirm that there's no more missing values!\n",
+    "# Confirm that there are no more missing values!\n",
     "numeric_cars.isnull().sum()"
    ]
   },
@@ -827,15 +827,15 @@
     "    knn = KNeighborsRegressor()\n",
     "    np.random.seed(1)\n",
     "        \n",
-    "    # Randomize order of rows in data frame.\n",
+    "    # Randomize order of rows in DataFrame.\n",
     "    shuffled_index = np.random.permutation(df.index)\n",
     "    rand_df = df.reindex(shuffled_index)\n",
     "\n",
     "    # Divide number of rows in half and round.\n",
     "    last_train_row = int(len(rand_df) / 2)\n",
     "    \n",
-    "    # Select the first half and set as training set.\n",
-    "    # Select the second half and set as test set.\n",
+    "    # Select the first half, and set as training set.\n",
+    "    # Select the second half, and set as test set.\n",
     "    train_df = rand_df.iloc[0:last_train_row]\n",
     "    test_df = rand_df.iloc[last_train_row:]\n",
     "    \n",
@@ -956,15 +956,15 @@
     "def knn_train_test(train_col, target_col, df):\n",
     "    np.random.seed(1)\n",
     "        \n",
-    "    # Randomize order of rows in data frame.\n",
+    "    # Randomize order of rows in DataFrame.\n",
     "    shuffled_index = np.random.permutation(df.index)\n",
     "    rand_df = df.reindex(shuffled_index)\n",
     "\n",
     "    # Divide number of rows in half and round.\n",
     "    last_train_row = int(len(rand_df) / 2)\n",
     "    \n",
-    "    # Select the first half and set as training set.\n",
-    "    # Select the second half and set as test set.\n",
+    "    # Select the first half, and set as training set.\n",
+    "    # Select the second half, and set as test set.\n",
     "    train_df = rand_df.iloc[0:last_train_row]\n",
     "    test_df = rand_df.iloc[last_train_row:]\n",
     "    \n",
@@ -1100,15 +1100,15 @@
     "def knn_train_test(train_cols, target_col, df):\n",
     "    np.random.seed(1)\n",
     "    \n",
-    "    # Randomize order of rows in data frame.\n",
+    "    # Randomize order of rows in DataFrame.\n",
     "    shuffled_index = np.random.permutation(df.index)\n",
     "    rand_df = df.reindex(shuffled_index)\n",
     "\n",
     "    # Divide number of rows in half and round.\n",
     "    last_train_row = int(len(rand_df) / 2)\n",
     "    \n",
-    "    # Select the first half and set as training set.\n",
-    "    # Select the second half and set as test set.\n",
+    "    # Select the first half, and set as training set.\n",
+    "    # Select the second half, and set as test set.\n",
     "    train_df = rand_df.iloc[0:last_train_row]\n",
     "    test_df = rand_df.iloc[last_train_row:]\n",
     "    \n",
@@ -1266,15 +1266,15 @@
     "def knn_train_test(train_cols, target_col, df):\n",
     "    np.random.seed(1)\n",
     "    \n",
-    "    # Randomize order of rows in data frame.\n",
+    "    # Randomize order of rows in DataFrame.\n",
     "    shuffled_index = np.random.permutation(df.index)\n",
     "    rand_df = df.reindex(shuffled_index)\n",
     "\n",
     "    # Divide number of rows in half and round.\n",
     "    last_train_row = int(len(rand_df) / 2)\n",
     "    \n",
-    "    # Select the first half and set as training set.\n",
-    "    # Select the second half and set as test set.\n",
+    "    # Select the first half, and set as training set.\n",
+    "    # Select the second half, and set as test set.\n",
     "    train_df = rand_df.iloc[0:last_train_row]\n",
     "    test_df = rand_df.iloc[last_train_row:]\n",
     "    \n",
@@ -1364,7 +1364,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.6"
+   "version": "3.8.5"
   }
  },
  "nbformat": 4,
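Reviewer note: the comment blocks edited above all describe the same holdout pattern — shuffle the DataFrame, split it in half, train on the first half, test on the second. A minimal sketch of that function as the surrounding diff context suggests it (the default `KNeighborsRegressor` and the RMSE return are taken from the notebook's pattern; column names in any caller are hypothetical):

```python
import numpy as np
import pandas as pd
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

def knn_train_test(train_cols, target_col, df):
    np.random.seed(1)

    # Randomize order of rows in DataFrame.
    shuffled_index = np.random.permutation(df.index)
    rand_df = df.reindex(shuffled_index)

    # Divide number of rows in half and round.
    last_train_row = int(len(rand_df) / 2)

    # Select the first half, and set as training set.
    # Select the second half, and set as test set.
    train_df = rand_df.iloc[0:last_train_row]
    test_df = rand_df.iloc[last_train_row:]

    # Fit a default k-nearest neighbors model and return the test RMSE.
    knn = KNeighborsRegressor()
    knn.fit(train_df[train_cols], train_df[target_col])
    predictions = knn.predict(test_df[train_cols])
    return mean_squared_error(test_df[target_col], predictions) ** 0.5
```

Seeding before the permutation is what makes the split reproducible across the notebook's repeated definitions of this function.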

+ 6 - 6
Mission165Solutions.ipynb

@@ -525,7 +525,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## How many rows in the data set?"
+    "## How many rows are in the dataset?"
    ]
   },
   {
@@ -632,9 +632,9 @@
     "collapsed": true
    },
    "source": [
-    "### Observation 1: By default -- 31 numeric columns and 21 string columns.\n",
+    "### Observation 1: By default, there are 31 numeric columns and 21 string columns.\n",
     "\n",
-    "### Observation 2: It seems like one column in particular (the `id` column) is being cast to int64 in the last 2 chunks but not in the earlier chunks. Since the `id` column won't be useful for analysis, visualization, or predictive modelling let's ignore this column.\n",
+    "### Observation 2: It seems like one column in particular (the `id` column) is being cast to int64 in the last 2 chunks but not in the earlier chunks. Since the `id` column won't be useful for analysis, visualization, or predictive modeling, let's ignore this column.\n",
     "\n",
     "## How many unique values are there in each string column? How many of the string columns contain values that are less than 50% unique?"
    ]
@@ -797,7 +797,7 @@
    "source": [
     "## Optimizing String Columns\n",
     "\n",
-    "### Determine which string columns you can convert to a numeric type if you clean them. Let's focus on columns that would actually be useful for analysis and modelling."
+    "### Determine which string columns you can convert to a numeric type if you clean them. Let's focus on columns that would actually be useful for analysis and modeling."
    ]
   },
   {
@@ -1108,7 +1108,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Convert to category"
+    "### Convert to category."
    ]
   },
   {
@@ -1613,7 +1613,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.8.2"
+   "version": "3.8.5"
   }
  },
  "nbformat": 4,
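Reviewer note: the Mission165 hunks above cover checking which string columns are less than 50% unique and converting them to the `category` dtype. A minimal sketch of that check-then-convert step — the DataFrame and column names are invented for illustration:

```python
import pandas as pd

# Hypothetical stand-in for a chunk of the loans data
df = pd.DataFrame({"term": [" 36 months", " 60 months"] * 500,
                   "grade": list("ABCDE") * 200})

for col in df.select_dtypes(include=["object"]).columns:
    # Convert only columns that are less than 50% unique,
    # where the category encoding actually saves memory.
    if df[col].nunique() / len(df[col]) < 0.5:
        df[col] = df[col].astype("category")

print(df.dtypes)
```

`memory_usage(deep=True)` before and after the loop is the usual way to confirm the savings.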

+ 3 - 3
Mission167Solutions.ipynb

@@ -205,7 +205,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "# Drop columns representing URL's or containing way too many missing values (>90% missing)\n",
+    "# Drop columns representing URLs or containing too many missing values (>90% missing)\n",
     "drop_cols = ['investor_permalink', 'company_permalink', 'investor_category_code']\n",
     "keep_cols = chunk.columns.drop(drop_cols)"
    ]
@@ -656,7 +656,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Loading Chunks Into SQLite"
+    "## Loading Chunks into SQLite"
    ]
   },
   {
@@ -690,7 +690,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.8.2"
+   "version": "3.8.5"
   }
  },
  "nbformat": 4,
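Reviewer note: the "Loading Chunks into SQLite" heading retitled above corresponds to the usual pattern of appending each processed chunk to a table. A minimal self-contained sketch — the tiny stand-in CSV and table name are invented here; the notebook streams the real `crunchbase` data in much larger chunks:

```python
import sqlite3
import pandas as pd

# Tiny stand-in CSV so the sketch is runnable end to end
pd.DataFrame({"company_name": ["A", "B", "C", "D"],
              "raised_amount_usd": [1e6, 2e6, None, 4e6]}).to_csv(
    "investments_sample.csv", index=False)

conn = sqlite3.connect(":memory:")
for chunk in pd.read_csv("investments_sample.csv", chunksize=2):
    # Append each chunk; to_sql creates the table on the first pass.
    chunk.to_sql("investments", conn, if_exists="append", index=False)

rows = conn.execute("SELECT COUNT(*) FROM investments").fetchone()[0]
print(rows)  # 4
```

`if_exists="append"` is the key choice: without it, each chunk would replace the table instead of growing it.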

+ 10 - 10
Mission177Solutions.ipynb

@@ -28,7 +28,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "We chose a dictionary where the keys are the stock symbols and the values are DataFrames with the from the corresponding CSV file.\n",
+    "We chose a dictionary where the keys are the stock symbols and the values are DataFrames from the corresponding CSV file.\n",
     "\n",
     "Let's display the data stored for the `aapl` stock symbol:"
    ]
@@ -146,7 +146,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Computing Average Closing Prices "
+    "## Computing Average Closing Prices"
    ]
   },
   {
@@ -791,21 +791,21 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "It appears the `amzn` and `aapl` have the highest average closing prices, while `blfs`, and `apdn` have the lowest average closing prices."
+    "It appears that `amzn` and `aapl` have the highest average closing prices, while `blfs` and `apdn` have the lowest average closing prices."
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# Organizing the trades per day"
+    "# Organizing the Trades Per Day"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "We are going to calculate a dictionary where the keys are the days and the values are list of pairs `(volume, stock_symbol)` of all trades that occurred on that day."
+    "We are going to calculate a dictionary where the keys are the days and the values are lists of pairs `(volume, stock_symbol)` of all trades that occurred on that day."
    ]
   },
   {
@@ -830,14 +830,14 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# Finding The Most Traded Stock Each Day"
+    "# Finding the Most Traded Stock Each Day"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Calculate a dictionary there the keys are the days and the value of each day is a pair `(volume, stock_symbol)` with the most traded stock symbol on that day."
+    "Calculate a dictionary where the keys are the days and the value of each day is a pair `(volume, stock_symbol)` with the most traded stock symbol on that day."
    ]
   },
   {
@@ -857,7 +857,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Verify a few of the results"
+    "## Verify a Few of the Results"
    ]
   },
   {
@@ -887,7 +887,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "# Searching For High Volume Days"
+    "# Searching for High Volume Days"
    ]
   },
   {
@@ -998,7 +998,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.7.4"
+   "version": "3.8.5"
   }
  },
  "nbformat": 4,
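Reviewer note: the Mission177 prose edited above describes building a dictionary mapping each day to a list of `(volume, stock_symbol)` pairs, then finding the most traded stock per day. Because tuples compare element-wise, `max` over the pairs picks the highest volume directly. A minimal sketch with invented trades:

```python
from collections import defaultdict

# Hypothetical trades: (day, volume, stock_symbol)
trades = [("2007-01-03", 1000, "aapl"), ("2007-01-03", 2500, "amzn"),
          ("2007-01-04", 1800, "aapl"), ("2007-01-04", 900, "blfs")]

# Keys are days; values are lists of (volume, stock_symbol) pairs
trades_by_day = defaultdict(list)
for day, volume, symbol in trades:
    trades_by_day[day].append((volume, symbol))

# Tuples compare element-wise, so max() picks the pair with the highest volume
most_traded_by_day = {day: max(pairs) for day, pairs in trades_by_day.items()}
print(most_traded_by_day)
```

Putting volume first in the pair is what makes the plain `max` call work without a `key` function.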

+ 5 - 5
Mission188Solution.ipynb

@@ -186,7 +186,7 @@
    "source": [
     "# %load functions.py\n",
     "def process_missing(df):\n",
-    "    \"\"\"Handle various missing values from the data set\n",
+    "    \"\"\"Handle various missing values from the dataset\n",
     "\n",
     "    Usage\n",
     "    ------\n",
@@ -439,11 +439,11 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "The `SibSp` column shows the number of siblings and/or spouses each passenger had on board, while the `Parch` columns shows the number of parents or children each passenger had onboard. Neither column has any missing values.\n",
+    "The `SibSp` column shows the number of siblings and/or spouses each passenger had on board, while the `Parch` column shows the number of parents or children each passenger had onboard. Neither column has any missing values.\n",
     "\n",
     "The distribution of values in both columns is skewed right, with the majority of values being zero.\n",
     "\n",
-    "You can sum these two columns to explore the total number of family members each passenger had onboard.  The shape of the distribution of values in this case is similar, however there are less values at zero, and the quantity tapers off less rapidly as the values increase.\n",
+    "You can sum these two columns to explore the total number of family members each passenger had onboard. The shape of the distribution of values in this case is similar; however, there are fewer values at zero, and the quantity tapers off less rapidly as the values increase.\n",
     "\n",
    "Looking at the survival rates of the combined family members, you can see that few of the over 500 passengers with no family members survived, while greater numbers of passengers with family members survived."
    ]
@@ -686,7 +686,7 @@
     "    all_y = df[\"Survived\"]\n",
     "\n",
     "    # List of dictionaries, each containing a model name,\n",
-    "    # it's estimator and a dict of hyperparameters\n",
+    "    # its estimator and a dict of hyperparameters\n",
     "    models = [\n",
     "        {\n",
     "            \"name\": \"LogisticRegression\",\n",
@@ -788,7 +788,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.8.2"
+   "version": "3.8.5"
   }
  },
  "nbformat": 4,
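Reviewer note: the Mission188 hunk above shows a list of dictionaries, each holding a model name, its estimator, and a dict of hyperparameters. The usual way to consume such a list is a `GridSearchCV` loop. A minimal sketch — the synthetic data and parameter grids are illustrative, not the notebook's actual grids:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the Titanic features and target
all_X, all_y = make_classification(n_samples=200, random_state=0)

# List of dictionaries, each containing a model name,
# its estimator and a dict of hyperparameters
models = [
    {"name": "LogisticRegression",
     "estimator": LogisticRegression(max_iter=1000),
     "hyperparameters": {"solver": ["lbfgs", "liblinear"]}},
    {"name": "KNeighborsClassifier",
     "estimator": KNeighborsClassifier(),
     "hyperparameters": {"n_neighbors": [1, 3, 5]}},
]

for model in models:
    # Exhaustively search each model's grid with 5-fold cross-validation
    grid = GridSearchCV(model["estimator"], model["hyperparameters"], cv=5)
    grid.fit(all_X, all_y)
    model["best_params"] = grid.best_params_
    model["best_score"] = grid.best_score_
    print(model["name"], model["best_score"])
```

Storing the results back onto each dict keeps model, grid, and best score together for later comparison.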

+ 6 - 6
Mission191Solutions.ipynb

@@ -263,7 +263,7 @@
     "- Slim Jim Bites (Blues)\n",
     "- Meteor and the Girls (Pop)\n",
     "\n",
-    "It's worth keeping in mind that combined, these three genres only make up only 17% of total sales, so we should be on the lookout for artists and albums from the 'rock' genre, which accounts for 53% of sales."
+    "It's worth keeping in mind that, combined, these three genres make up only 17% of total sales, so we should be on the lookout for artists and albums from the rock genre, which accounts for 53% of sales."
    ]
   },
   {
@@ -349,7 +349,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "While there is a 20% difference in sales between Jane (the top employee) and Steve (the bottom employee), the difference roughly corresponds with the differences in their hiring dates."
+    "While there is a 20% difference in sales between Jane (the top employee) and Steve (the bottom employee), the difference roughly corresponds to the differences in their hiring dates."
    ]
   },
   {
@@ -527,14 +527,14 @@
     "- United Kingdom\n",
     "- India\n",
     "\n",
-    "It's worth keeping in mind that because the amount of data from each of these countries is relatively low.  Because of this, we should be cautious spending too much money on new marketing campaigns, as the sample size is not large enough to give us high confidence.  A better approach would be to run small campaigns in these countries, collecting and analyzing the new customers to make sure that these trends hold with new customers."
+    "It's worth remembering this because the amount of data from each of these countries is relatively low. As such, we should be cautious about spending too much money on new marketing campaigns because the sample size isn't large enough to give us high confidence. A better approach would be to run small campaigns in these countries, collecting and analyzing the new customers to make sure that these trends hold with new customers."
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Albums vs Individual Tracks"
+    "## Albums vs. Individual Tracks"
    ]
   },
   {
@@ -640,7 +640,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Album purchases account for 18.6% of purchases.  Based on this data, I would recommend against purchasing only select tracks from albums from record companies, since there is potential to lose one fifth of revenue."
+    "Album purchases account for 18.6% of purchases. Based on this data, I would recommend against purchasing only select tracks from albums from record companies, since there is potential to lose one fifth of revenue."
    ]
   },
   {
@@ -669,7 +669,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.6.8"
+   "version": "3.8.5"
   }
  },
  "nbformat": 4,

+ 30 - 30
Mission193Solutions.ipynb

@@ -2617,15 +2617,15 @@
    "source": [
    "It looks like the game log has a record of over 170,000 games, chronologically ordered, occurring between 1871 and 2016.\n",
     "\n",
-    "For each game we have:\n",
+    "For each game we have the following:\n",
     "\n",
-    "- general information on the game\n",
-    "- team level stats for each team\n",
-    "- a list of players from each team, numbered, with their defensive positions\n",
-    "- the umpires that officiated the game\n",
-    "- some 'awards', like winning and losing pitcher\n",
+    "- General information on the game\n",
+    "- Team level stats for each team\n",
+    "- A list of players from each team, numbered, with their defensive positions\n",
+    "- The umpires who officiated the game\n",
+    "- Some awards, like winning and losing pitcher\n",
     "\n",
-    "We have a `game_log_fields.txt` file that tell us that the player number corresponds with the order in which they batted.\n",
+    "We have a `game_log_fields.txt` file that tells us that the player number corresponds to the order in which they batted.\n",
     "\n",
     "It's worth noting that there is no natural primary key column for this table."
    ]
@@ -2742,9 +2742,9 @@
     "hidden": true
    },
    "source": [
-    "This seems to be a list of people with IDs.  The IDs look like they match up with those used in the game log.  There are debut dates, for players, managers, coaches and umpires.  We can see that some people might have been one or more of these roles.\n",
+    "This seems to be a list of people with IDs. The IDs look like they match up with those used in the game log. There are debut dates for players, managers, coaches, and umpires. We can see that some people might have played one or more of these roles.\n",
     "\n",
-    "It also looks like coaches and managers are two different things in baseball.  After some research, managers are what would be called a 'coach' or 'head coach' in other sports, and coaches are more specialized, like base coaches.  It also seems like coaches aren't recorded in the game log."
+    "It also looks like coaches and managers are two different things in baseball. After some research, managers are what we would call a *coach* or *head coach* in other sports, and coaches are more specialized, like base coaches. It also seems that coaches aren't recorded in the game log."
    ]
   },
   {
@@ -2885,7 +2885,7 @@
     "hidden": true
    },
    "source": [
-    "This seems to be a list of all baseball parks.  There are IDs which seem to match with the game log, as well as names, nicknames, city and league."
+    "This seems to be a list of all baseball parks.  There are IDs that seem to match with the game log, as well as names, nicknames, city, and league."
    ]
   },
   {
@@ -3006,7 +3006,7 @@
     "hidden": true
    },
    "source": [
-    "This seems to be a list of all teams, with team_ids which seem to match the game log. Interestingly, there is a `franch_id`, let's take a look at this:"
+    "This seems to be a list of all teams, with team_ids that seem to match the game log. Interestingly, there is a `franch_id`; let's take a look at this:"
    ]
   },
   {
@@ -3042,7 +3042,7 @@
     "hidden": true
    },
    "source": [
-    "We might have `franch_id` occurring a few times for some teams, let's look at the first one in more detail."
+    "We might have `franch_id` occurring a few times for some teams. Let's look at the first one in more detail."
    ]
   },
   {
@@ -3142,7 +3142,7 @@
     "hidden": true
    },
    "source": [
-    "It appears that teams move between leagues and cities.  The team_id changes when this happens, `franch_id` (which is probably 'Franchise') helps us tie all of this together."
+    "It appears that teams move between leagues and cities. The team_id changes when this happens; `franch_id` (which is probably *Franchise*) helps us tie all of this together."
    ]
   },
   {
@@ -3153,7 +3153,7 @@
    "source": [
     "**Defensive Positions**\n",
     "\n",
-    "In the game log, each player has a defensive position listed, which seems to be a number between 1-10.  Doing some research around this, I found [this article](http://probaseballinsider.com/baseball-instruction/baseball-basics/baseball-basics-positions/) which gives us a list of names for each numbered position:\n",
+    "In the game log, each player has a defensive position listed, which seems to be a number from 1 to 10. Doing some research, we find [this article](http://probaseballinsider.com/baseball-instruction/baseball-basics/baseball-basics-positions/), which gives us a list of names for each numbered position:\n",
     "\n",
     "1. Pitcher\n",
     "2. Catcher\n",
@@ -3165,11 +3165,11 @@
     "8. Center Field\n",
     "9. Right Field\n",
     "\n",
-    "The 10th position isn't included, it may be a way of describing a designated hitter that does not field.  I can find a retrosheet page that indicates that position `0` is used for this, but we don't have any position 0 in our data.  I have chosen to make this an 'Unknown Position' so I'm not including data based on a hunch.\n",
+    "The 10th position isn't included. It may be a way of describing a designated hitter who does not field. We can find a retrosheet page that indicates that position `0` is used for this, but we don't have any position 0 in our data. We have chosen to make this an *Unknown Position*, so we're not including data based on a hunch.\n",
     "\n",
     "**Leagues**\n",
     "\n",
-    "Wikipedia tells us there are currently two leagues - the American (AL) and National (NL). Let's start by finding out what leagues are listed in the main game log:"
+    "Wikipedia tells us there are currently two leagues: the American (AL) and the National (NL). Let's start by determining which leagues are listed in the main game log:"
    ]
   },
   {
@@ -3206,7 +3206,7 @@
     "hidden": true
    },
    "source": [
-    "It looks like most of our games fall into the two current leagues, but that there are four other leagues.  Let's write a quick function to get some info on the years of these leagues:"
+    "It looks like most of our games fall into the two current leagues, but there are four other leagues. Let's write a quick function to get some info on the years of these leagues:"
    ]
   },
   {
@@ -3247,7 +3247,7 @@
     "hidden": true
    },
    "source": [
-    "Now we have some years which will help us do some research.  After some googling we come up with:\n",
+    "Now we have some years, which will help us do some research. After some googling, we come up with this list:\n",
     "\n",
     "- `NL`: National League\n",
     "- `AL`: American League\n",
@@ -3256,7 +3256,7 @@
     "- `PL`: [Players League](https://en.wikipedia.org/wiki/Players%27_League)\n",
     "- `UA`: [Union Association](https://en.wikipedia.org/wiki/Union_Association)\n",
     "\n",
-    "It also looks like we have about 1000 games where the home team doesn't have a value for league."
+    "It also looks like we have about 1,000 games where the home team doesn't have a value for league."
    ]
   },
   {
@@ -3506,13 +3506,13 @@
     "The following are opportunities for normalization of our data:\n",
     "\n",
     "- In `person_codes`, all the debut dates will be able to be reproduced using game log data.\n",
-    "- In `team_codes`, the start, end and sequence columns will be able to be reproduced using game log data.\n",
-    "- In `park_codes`, the start and end years will be able to be reproduced using game log data.  While technically the state is an attribute of the city, we might not want to have a an incomplete city/state table so we will leave this in.\n",
-    "- There are lots of places in `game` log where we have a player ID followed by the players name.  We will be able to remove this and use the name data in `person_codes`\n",
-    "- In `game_log`, all offensive and defensive stats are repeated for the home team and the visiting team.  We could break these out and have a table that lists each game twice, one for each team, and cut out this column repetition.\n",
-    "- Similarly, in `game_log`, we have a listing for 9 players on each team with their positions - we can remove these and have one table that tracks player appearances and their positions.\n",
-    "- We can do a similar thing with the umpires from `game_log`, instead of listing all four positions as columns, we can put the umpires either in their own table or make one table for players, umpires and managers.\n",
-    "- We have several awards in `game_log` like winning pitcher and losing pitcher.  We can either break these out into their own table, have a table for awards, or combine the awards in with general appearances like the players and umpires."
+    "- In `team_codes`, the start, end, and sequence columns will be able to be reproduced using game log data.\n",
+    "- In `park_codes`, the start and end years will be able to be reproduced using game log data. While technically the state is an attribute of the city, we might not want to have an incomplete city/state table, so we will leave this in.\n",
+    "- There are many places in `game_log` where we have a player ID followed by the player's name. We will be able to remove this and use the name data in `person_codes`.\n",
+    "- In `game_log`, all offensive and defensive stats are repeated for the home team and the visiting team. We could break these out and have a table that lists each game twice, one for each team, and cut out this column repetition.\n",
+    "- Similarly, in `game_log`, we have a listing for 9 players on each team with their positions; we can remove these and have one table that tracks player appearances and their positions.\n",
+    "- We can do a similar thing with the umpires from `game_log`. Instead of listing all four positions as columns, we can put the umpires either in their own table or make one table for players, umpires, and managers.\n",
+    "- We have several awards in `game_log`, like winning pitcher and losing pitcher. We can either break these out into their own table, have a table for awards, or combine the awards in with general appearances like the players and umpires."
    ]
   },
   {
@@ -4148,7 +4148,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Adding The Team and Game Tables"
+    "## Adding the Team and Game Tables"
    ]
   },
   {
@@ -5945,9 +5945,9 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "dscontent",
+   "display_name": "Python 3",
    "language": "python",
-   "name": "dscontent"
+   "name": "python3"
   },
   "language_info": {
    "codemirror_mode": {
@@ -5959,7 +5959,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.4.4"
+   "version": "3.8.5"
   },
   "notify_time": "5"
  },