---
title: 'Guided Project: Finding the Best Markets to Advertise In'
author: "Dataquest"
date: "11/19/2019"
output: html_document
---
# Finding the Two Best Markets to Advertise an E-learning Product In
In this project, we'll aim to find the two best markets to advertise our product in — we're working for an e-learning company that offers courses on programming. Most of our courses are on web and mobile development, but we also cover many other domains, like data science, game development, etc.
# Understanding the Data
To avoid spending money on organizing a survey, we'll first try to make use of existing data to determine whether we can reach any reliable result.
One good candidate for our purpose is [freeCodeCamp's 2017 New Coder Survey](https://medium.freecodecamp.org/we-asked-20-000-people-who-they-are-and-how-theyre-learning-to-code-fff5d668969). [freeCodeCamp](https://www.freecodecamp.org/) is a free e-learning platform that offers courses on web development. Because they run [a popular Medium publication](https://medium.freecodecamp.org/) (over 400,000 followers), their survey attracted new coders with varying interests (not only web development), which is ideal for the purpose of our analysis.
The survey data is publicly available in [this GitHub repository](https://github.com/freeCodeCamp/2017-new-coder-survey). Below, we'll do a quick exploration of the `2017-fCC-New-Coders-Survey-Data.csv` file stored in the `clean-data` folder of the repository we just mentioned. We'll read in the file using the direct link [here](https://raw.githubusercontent.com/freeCodeCamp/2017-new-coder-survey/master/clean-data/2017-fCC-New-Coders-Survey-Data.csv).
```{r}
# Read in the survey data and take a quick look at its dimensions and first rows
library(readr)
fcc <- read_csv("2017-fCC-New-Coders-Survey-Data.csv")
dim(fcc)
head(fcc, 5)
```
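If the CSV file isn't available locally, we could instead read the data straight from the raw GitHub link mentioned above. This is only an optional alternative (not evaluated here), assuming the file is still hosted at that URL and an internet connection is available:
```{r, eval=FALSE}
# Optional alternative: read the survey data directly from the raw GitHub URL
# (assumes the link is still live; requires an internet connection)
url <- "https://raw.githubusercontent.com/freeCodeCamp/2017-new-coder-survey/master/clean-data/2017-fCC-New-Coders-Survey-Data.csv"
fcc <- read_csv(url)
```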
# Checking for Sample Representativity
As we mentioned in the introduction, most of our courses are on web and mobile development, but we also cover many other domains, like data science, game development, etc. For the purpose of our analysis, we want to answer questions about a population of new coders that are interested in the subjects we teach. We'd like to know:
* Where these new coders are located.
* What locations have the greatest densities of new coders.
* How much money they're willing to spend on learning.
So we first need to clarify whether the data set has the right categories of people for our purpose. The `JobRoleInterest` column describes for every participant the role(s) they'd be interested in working in. If a participant is interested in working in a certain domain, it means that they're also interested in learning about that domain. So let's take a look at the frequency distribution table of this column [^1] and determine whether the data we have is relevant.
```{r}
# Split-and-combine workflow
library(dplyr)
fcc %>%
  group_by(JobRoleInterest) %>%
  summarise(freq = n() * 100 / nrow(fcc)) %>%
  arrange(desc(freq))
```
The information in the table above is quite granular, but from a quick scan it looks like:
* A lot of people are interested in web development (full-stack _web development_, front-end _web development_, and back-end _web development_).
* A few people are interested in mobile development.
* A few people are interested in domains other than web and mobile development.
It's also interesting to note that many respondents are interested in more than one subject. It'd be useful to get a better picture of how many people are interested in a single subject and how many have mixed interests. Consequently, in the next code block, we'll:
- Drop the NA values [^2] first, because NA values can't be split.
- Split each string in the `JobRoleInterest` column to find the number of options each participant selected.
- Generate a frequency table for the variable describing the number of options [^3].
```{r}
# Split each string in the 'JobRoleInterest' column
splitted_interests <- fcc %>%
  select(JobRoleInterest) %>%
  tidyr::drop_na() %>%
  rowwise() %>% # dplyr works column-wise by default; rowwise() makes the next mutate operate row by row
  mutate(opts = length(stringr::str_split(JobRoleInterest, ",")[[1]]))
# Frequency table for the variable describing the number of options
n_of_options <- splitted_interests %>%
  ungroup() %>% # needed because we used rowwise() above
  group_by(opts) %>%
  summarize(freq = n() * 100 / nrow(splitted_interests))
n_of_options
```
It turns out that only 31.65% of the participants have a clear idea about what programming niche they'd like to work in, while the vast majority of students have mixed interests. But given that we offer courses on various subjects, the fact that new coders have mixed interests might actually be good for us.
The focus of our courses is on web and mobile development, so let's find out how many respondents chose at least one of these two options.
```{r}
# Frequency table (we can also use split-and-combine)
web_or_mobile <- stringr::str_detect(fcc$JobRoleInterest, "Web Developer|Mobile Developer")
freq_table <- table(web_or_mobile)
freq_table <- freq_table * 100 / sum(freq_table)
freq_table
# Bar graph for the frequency table above
df <- tibble::tibble(x = c("Other Subject", "Web or Mobile Development"),
                     y = as.numeric(freq_table))
library(ggplot2)
ggplot(data = df, aes(x = x, y = y, fill = x)) +
  geom_col()
```
It turns out that most people in this survey (roughly 86%) are interested in either web or mobile development. These figures give us a strong reason to consider this sample representative of our population of interest. We want to advertise our courses to people interested in all sorts of programming niches, but mostly web and mobile development.
Now we need to figure out which markets would be the best to invest advertising money in. We'd like to know:
* Where these new coders are located.
* What locations have the greatest numbers of new coders.
* How much money new coders are willing to spend on learning.
# New Coders - Locations and Densities
Let's begin by finding out where these new coders are located and what the densities (how many new coders there are) are for each location. This should be a good start for finding the two best markets to run our ad campaign in.
The data set provides information about the location of each participant at a country level. We can think of each country as an individual market, so we can frame our goal as finding the two best countries to advertise in.
We can start by examining the frequency distribution table of the `CountryLive` variable, which describes what country each participant lives in (not their country of origin). We'll only consider the participants who answered what role(s) they're interested in, to make sure we work with a representative sample.
```{r}
# Isolate the participants that answered what role they'd be interested in
fcc_good <- fcc %>%
  tidyr::drop_na(JobRoleInterest)
# Frequency table with absolute and relative frequencies
fcc_good %>%
  group_by(CountryLive) %>%
  summarise(`Absolute frequency` = n(),
            `Percentage` = n() * 100 / nrow(fcc_good)) %>%
  arrange(desc(Percentage))
```
44.69% of our potential customers are located in the US, which definitely seems like the most interesting market. India has the second highest customer density, but it's just 7.55%, not far ahead of the United Kingdom (4.50%) or Canada (3.71%).
This is useful information, but we need to go more in depth than this and figure out how much money people are actually willing to spend on learning. Advertising in high-density markets where most people are only willing to learn for free is extremely unlikely to be profitable for us.
# Spending Money for Learning
The `MoneyForLearning` column describes, in US dollars, the amount of money participants spent from the moment they started coding until the moment they completed the survey. Our company sells subscriptions at a price of \$59 per month, and for this reason we're interested in finding out how much money each student spends per month.
We'll narrow down our analysis to only four countries: the US, India, the United Kingdom, and Canada. We do this for two reasons:
* These are the countries with the highest frequencies in the table above, which means we have a decent amount of data for each.
* Our courses are written in English, and English is an official language in all four of these countries. The more people know English, the better our chances of targeting the right people with our ads.
Let's start by creating a new column that describes the amount of money a student has spent per month so far. To do that, we'll need to divide the `MoneyForLearning` column by the `MonthsProgramming` column. The problem is that some students answered that they have been learning to code for 0 months (it might be that they had just started). To avoid dividing by 0, we'll replace 0 with 1 in the `MonthsProgramming` column.
```{r}
# Replace 0s with 1s to avoid division by 0
fcc_good <- fcc_good %>%
  mutate(MonthsProgramming = replace(MonthsProgramming, MonthsProgramming == 0, 1))
# New column for the amount of money each student spends each month
fcc_good <- fcc_good %>%
  mutate(money_per_month = MoneyForLearning / MonthsProgramming)
# Count the NA values left in the new column
fcc_good %>%
  summarise(na_count = sum(is.na(money_per_month))) %>%
  pull(na_count)
```
Let's keep only the rows that don't have NA values for the `money_per_month` column.
```{r}
# Keep only the rows with non-NA values in the `money_per_month` column
fcc_good <- fcc_good %>% tidyr::drop_na(money_per_month)
```
We want to group the data by country, and then measure the average amount of money students spend per month in each country. First, let's remove the rows with `NA` values in the `CountryLive` column, and check whether we still have enough data for the four countries that interest us.
```{r}
# Remove the rows with NA values in 'CountryLive'
fcc_good <- fcc_good %>% tidyr::drop_na(CountryLive)
# Frequency table to check if we still have enough data
fcc_good %>%
  group_by(CountryLive) %>%
  summarise(freq = n()) %>%
  arrange(desc(freq)) %>%
  head()
```
This should be enough, so let's compute the average amount of money a student spends per month in each country. We'll use the mean to measure the average.
```{r}
# Mean sum of money spent by students each month
countries_mean <- fcc_good %>%
  filter(CountryLive %in% c('United States of America', 'India',
                            'United Kingdom', 'Canada')) %>%
  group_by(CountryLive) %>%
  summarize(mean = mean(money_per_month)) %>%
  arrange(desc(mean))
countries_mean
```
The results for the United Kingdom and Canada are a bit surprising relative to the values we see for India. If we considered a few socio-economic metrics (like [GDP per capita](https://bit.ly/2I3cukh)), we'd intuitively expect people in the UK and Canada to spend more on learning than people in India.
It might be that we don't have enough representative data for the United Kingdom and Canada, or we have some outliers (perhaps coming from wrong survey answers) making the mean too large for India, or too low for the UK and Canada. Or it might be that the results are correct.
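Before turning to plots, one quick way to probe the outlier hypothesis is to compare the mean with the median, which is robust to extreme values: a large gap between the two suggests the mean is being pulled up by a few very large values. This is only a quick sketch of a sanity check, not part of the main workflow:
```{r}
# Compare mean and median money spent per month; a large gap between them
# hints that extreme values are distorting the mean
fcc_good %>%
  filter(CountryLive %in% c('United States of America', 'India',
                            'United Kingdom', 'Canada')) %>%
  group_by(CountryLive) %>%
  summarise(mean = mean(money_per_month),
            median = median(money_per_month)) %>%
  arrange(desc(mean))
```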
# Dealing with Extreme Outliers
Let's use box plots to visualize the distribution of the `money_per_month` variable for each country.
```{r}
# Isolate only the countries of interest
only_4 <- fcc_good %>%
  filter(CountryLive %in% c('United States of America', 'India',
                            'United Kingdom', 'Canada'))
# Since we may remove rows later, add an index column with each row's number
# so we can still match rows against the original data frame
only_4 <- only_4 %>%
  mutate(index = row_number())
# Box plots to visualize distributions
ggplot(data = only_4, aes(x = CountryLive, y = money_per_month)) +
  geom_boxplot() +
  ggtitle("Money Spent Per Month Per Country\n(Distributions)") +
  xlab("Country") +
  ylab("Money per month (US dollars)") +
  theme_bw()
```
It's hard to see on the plot above whether there's anything wrong with the data for the United Kingdom, India, or Canada, but we can see immediately that there's something really off for the US: two people spend \$50,000 or more per month on learning. This is not impossible, but it seems extremely unlikely, so we'll remove every value that goes over \$20,000 per month.
```{r}
# Keep only those participants who spend less than $20,000 per month
fcc_good <- fcc_good %>%
  filter(money_per_month < 20000)
```
Now let's recompute the mean values and plot the box plots again.
```{r}
# Mean sum of money spent by students each month
countries_mean <- fcc_good %>%
  filter(CountryLive %in% c('United States of America', 'India',
                            'United Kingdom', 'Canada')) %>%
  group_by(CountryLive) %>%
  summarize(mean = mean(money_per_month)) %>%
  arrange(desc(mean))
countries_mean
```
```{r}
# Isolate only the countries of interest
only_4 <- fcc_good %>%
  filter(CountryLive %in% c('United States of America', 'India',
                            'United Kingdom', 'Canada')) %>%
  mutate(index = row_number())
# Box plots to visualize distributions
ggplot(data = only_4, aes(x = CountryLive, y = money_per_month)) +
  geom_boxplot() +
  ggtitle("Money Spent Per Month Per Country\n(Distributions)") +
  xlab("Country") +
  ylab("Money per month (US dollars)") +
  theme_bw()
```
We can see a few extreme outliers for India (values over \$2,500 per month), but it's unclear whether this is good data or not. Maybe these persons attended several bootcamps, which tend to be very expensive. Let's examine these data points to see if we can find anything relevant.
```{r}
# Inspect the extreme outliers for India
india_outliers <- only_4 %>%
  filter(CountryLive == 'India' &
           money_per_month >= 2500)
india_outliers
```
It seems that none of these participants attended a bootcamp. Overall, it's really hard to figure out from the data whether these people really spent that much money on learning. The actual question of the survey was _"Aside from university tuition, about how much money have you spent on learning to code so far (in US dollars)?"_, so they might have misunderstood and thought university tuition was included. It seems safer to remove these six rows.
```{r}
# Remove the outliers for India
only_4 <- only_4 %>%
  filter(!(index %in% india_outliers$index))
```
Looking back at the box plot above, we can also see more extreme outliers for the US (values over \$6,000 per month). Let's examine these participants in more detail.
```{r}
# Examine the extreme outliers for the US
us_outliers <- only_4 %>%
  filter(CountryLive == 'United States of America' &
           money_per_month >= 6000)
us_outliers
```
Out of these 11 extreme outliers, six people attended bootcamps, which justifies the large sums of money spent on learning. For the other five, it's hard to figure out from the data where they could have spent that much money on learning. Consequently, we'll remove the rows where participants reported spending \$6,000 or more each month but never attended a bootcamp.
Also, the data shows that eight respondents had been programming for no more than three months when they completed the survey. They most likely paid a large sum of money for a bootcamp that was going to last for several months, so the amount of money spent per month is unrealistic and should be significantly lower (because they probably didn't spend anything for the next couple of months after the survey). As a consequence, we'll remove these eight outliers as well.
In the next code block, we'll remove respondents that:
- Didn't attend bootcamps.
- Had been programming for three months or less at the time they completed the survey.
```{r}
# Remove the US respondents who spend $6,000 or more per month but didn't attend a bootcamp
no_bootcamp <- only_4 %>%
  filter(CountryLive == 'United States of America' &
           money_per_month >= 6000 &
           AttendedBootcamp == 0)
only_4 <- only_4 %>%
  filter(!(index %in% no_bootcamp$index))
# Remove the US respondents who spend $6,000 or more per month and had been programming for 3 months or less
less_than_3_months <- only_4 %>%
  filter(CountryLive == 'United States of America' &
           money_per_month >= 6000 &
           MonthsProgramming <= 3)
only_4 <- only_4 %>%
  filter(!(index %in% less_than_3_months$index))
```
Looking again at the last box plot above, we can also see an extreme outlier for Canada — a person who spends roughly \$5,000 per month. Let's examine this person in more depth.
```{r}
# Examine the extreme outliers for Canada
canada_outliers <- only_4 %>%
  filter(CountryLive == 'Canada' &
           money_per_month >= 4500 &
           MonthsProgramming <= 3)
canada_outliers
```
Here, the situation is similar to some of the US respondents — this participant had been programming for no more than two months when he completed the survey. He seems to have paid a large sum of money in the beginning to enroll in a bootcamp, and then he probably didn't spend anything for the next couple of months after the survey. We'll take the same approach here as for the US and remove this outlier.
```{r}
# Remove the extreme outliers for Canada
only_4 <- only_4 %>%
  filter(!(index %in% canada_outliers$index))
```
Let's recompute the mean values and generate the final box plots.
```{r}
# Mean sum of money spent by students each month
countries_mean <- only_4 %>%
  group_by(CountryLive) %>%
  summarize(mean = mean(money_per_month)) %>%
  arrange(desc(mean))
countries_mean
```
```{r}
# Box plots to visualize distributions
ggplot(data = only_4, aes(x = CountryLive, y = money_per_month)) +
  geom_boxplot() +
  ggtitle("Money Spent Per Month Per Country\n(Distributions)") +
  xlab("Country") +
  ylab("Money per month (US dollars)") +
  theme_bw()
```
# Choosing the Two Best Markets
Obviously, one country we should advertise in is the US. Lots of new coders live there, and they are willing to pay a good amount of money each month (roughly \$143).
We sell subscriptions at a price of \$59 per month, and Canada seems to be the best second choice because people there are willing to pay roughly \$93 per month, compared to India (\$66) and the United Kingdom (\$45).
The data strongly suggests that we shouldn't advertise in the UK, but let's take a second look at India before settling on Canada as our second choice:
* \$59 doesn't seem like an expensive sum for people in India, since they spend on average \$66 each month.
* We have almost twice as many potential customers in India as in Canada:
```{r}
# Frequency table for the 'CountryLive' column
only_4 %>%
  group_by(CountryLive) %>%
  summarise(freq = n() * 100 / nrow(only_4)) %>%
  arrange(desc(freq)) %>%
  head()
```
```{r}
# Frequency table to check if we still have enough data
only_4 %>%
  group_by(CountryLive) %>%
  summarise(freq = n()) %>%
  arrange(desc(freq)) %>%
  head()
```
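As one more hedged sanity check (a rough proxy only, since past spending doesn't necessarily translate into willingness to pay for a subscription, and this isn't part of the main analysis), we can look at what share of the remaining respondents in each country already spends at least our \$59 subscription price per month:
```{r}
# Share of respondents spending at least $59 per month, by country
# (rough affordability proxy; an assumption-laden check, not a definitive metric)
only_4 %>%
  group_by(CountryLive) %>%
  summarise(`Share spending >= $59 (%)` = mean(money_per_month >= 59) * 100) %>%
  arrange(desc(`Share spending >= $59 (%)`))
```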
So it's not crystal clear whether to choose Canada or India. Although Canada seems the more tempting choice, there's a good chance that India is actually the better one because of its larger number of potential customers.
At this point, it seems that we have several options:
1. Advertise in the US, India, and Canada by splitting the advertisement budget in various combinations:
- 60% for the US, 25% for India, 15% for Canada.
- 50% for the US, 30% for India, 20% for Canada; etc.
2. Advertise only in the US and India, or the US and Canada. Again, it makes sense to split the advertisement budget unequally. For instance:
- 70% for the US, and 30% for India.
- 65% for the US, and 35% for Canada; etc.
3. Advertise only in the US.
At this point, it's probably best to send our analysis to the marketing team and let them use their domain knowledge to decide. They might want to do some extra surveys in India and Canada and then get back to us for analyzing the new survey data.
# Conclusion
In this project, we analyzed survey data from new coders to find the two best markets to advertise in. The only solid conclusion we reached is that the US would be a good market to advertise in.
For the second best market, it wasn't clear-cut whether to choose India or Canada. We decided to send the results to the marketing team so they can use their domain knowledge to make the best decision.
# Documentation
[^1]: We can use the [Split-and-Combine workflow](https://app.dataquest.io/m/339/a/5).
[^2]: We can use the [`drop_na()` function](https://app.dataquest.io/m/326/a/6).
[^3]: We can use the [`stringr::str_split()` function](https://app.dataquest.io/m/342/a/6).