Jupyter notebook Homework 1/HW 1.5 - Advanced NumPy.ipynb

Kernel: Python 3
As a reminder, one of the suggested prerequisites for this course is programming experience, especially in Python. If you do not have experience in Python, we strongly recommend you go through the Codecademy Python course as soon as possible to brush up on the basics of Python.
Before going through this notebook, you may want to take a quick look at [this optional Debugging notebook](Optional 1.1 - Debugging.ipynb) for some tips on debugging your code when you get stuck.

Sometimes, there are more advanced operations we want to do with NumPy arrays. For example, if we had an array of values and wanted to set all negative values to zero, how would we do this? The answer is called fancy indexing, and it can be done in two ways: boolean indexing and array indexing.

import numpy as np

Boolean indexing

The idea behind boolean indexing is that for each element of the array, we know whether or not we want to select it. A boolean array is an array of the same shape as our original array which contains only True and False values. The locations of the True values in the boolean array indicate which elements of the original array we want to select, while the locations of the False values correspond to the elements we don't want to select.
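To make the idea concrete, here is a small sketch with a made-up array (not the experiment data): we build a boolean array by hand and use it to select elements.

import numpy as np

arr = np.array([3, -1, 7, -4, 2])                   # a small, made-up array
mask = np.array([True, False, True, False, True])   # a boolean array of the same shape
arr[mask]                                           # keeps only the elements where mask is True: array([3, 7, 2])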

Let's consider our experiment data again:

data = np.load("data/experiment_data.npy")
data
array([[ 1668.07869346, 774.38921876, 3161.14983152, ..., 2359.05394666, 784.36404676, 448.33416341], [ 2419.38185232, 809.18389145, 2766.62648929, ..., 1159.47379735, 1330.44887992, 1842.3268586 ], [ 2221.02887591, 1496.00517071, 354.95889145, ..., 1355.74575912, 1205.29137942, 1385.71283365], ..., [ 1654.50469248, 518.3271927 , 5127.58599224, ..., 2544.1042064 , 624.07607332, 1029.57386246], [ 480.68016502, 4690.12200498, 1520.27397139, ..., 1000.40541618, 988.73647145, 378.43452948], [ 1823.42891807, 3680.12951133, 3522.94413167, ..., 591.4133153 , 383.26367525, 1768.50528483]])

Recall that these are reaction times. It is typically accepted that really low reaction times -- such as less than 100 milliseconds -- are too fast for people to have actually seen and processed the stimulus. Let's see if there are any reaction times less than 100 milliseconds in our data.

To pull out just the elements less than 100 milliseconds, we need two steps. First, we use boolean comparisons to check which are less than 100ms:

too_fast = data < 100
too_fast
array([[False, False, False, ..., False, False, False], [False, False, False, ..., False, False, False], [False, False, False, ..., False, False, False], ..., [False, False, False, ..., False, False, False], [False, False, False, ..., False, False, False], [False, False, False, ..., False, False, False]], dtype=bool)

Then, using this too_fast array, we can index back into the original array, and see that there are indeed some trials which were abnormally fast:

data[too_fast]
array([ 86.28125135, 76.63231393, 68.72526177, 77.25801031, 97.065495 , 92.13792056, 90.05066503, 86.59892207, 96.45674184, 90.79293103, 81.97898954, 47.59226041, 98.80537434])

What this is doing is essentially saying: for every element of too_fast that is True, give me the corresponding element of data.

Because this is a boolean array, we can also negate it, and pull out all the elements that we consider to be valid reaction times:

data[~too_fast]
array([ 1668.07869346, 774.38921876, 3161.14983152, ..., 591.4133153 , 383.26367525, 1768.50528483])

Not only does this give us the elements: we can also assign new values through the same boolean index, and that assignment modifies the original array (a short sketch of this distinction follows after the next cell). In this case, we will set our "too fast" elements to have a value of "not a number", or NaN:

data[too_fast] = np.nan
data
array([[ 1668.07869346, 774.38921876, 3161.14983152, ..., 2359.05394666, 784.36404676, 448.33416341], [ 2419.38185232, 809.18389145, 2766.62648929, ..., 1159.47379735, 1330.44887992, 1842.3268586 ], [ 2221.02887591, 1496.00517071, 354.95889145, ..., 1355.74575912, 1205.29137942, 1385.71283365], ..., [ 1654.50469248, 518.3271927 , 5127.58599224, ..., 2544.1042064 , 624.07607332, 1029.57386246], [ 480.68016502, 4690.12200498, 1520.27397139, ..., 1000.40541618, 988.73647145, 378.43452948], [ 1823.42891807, 3680.12951133, 3522.94413167, ..., 591.4133153 , 383.26367525, 1768.50528483]])
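As an aside, it is worth being clear about what modifies the original array and what does not. Plain boolean indexing like data[too_fast] returns a copy, so changing that copy leaves data untouched; it is assigning through the boolean index, as we just did, that writes back into the original array. A small sketch with made-up values:

import numpy as np

arr = np.array([1.0, -2.0, 3.0, -4.0])

selected = arr[arr < 0]   # boolean indexing returns a copy of the selected elements
selected[:] = 0           # changing the copy does not change arr
print(arr)                # still [ 1. -2.  3. -4.]

arr[arr < 0] = 0          # assigning through the boolean index modifies arr in place
print(arr)                # now [ 1.  0.  3.  0.]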

Now, if we try to find which elements are less than 100 milliseconds, we will not find any:

data[data < 100]
/usr/local/lib/python3.4/dist-packages/ipykernel/__main__.py:1: RuntimeWarning: invalid value encountered in less
  if __name__ == '__main__':
array([], dtype=float64)
Note: You may see a RuntimeWarning when you run the above cell, saying that an "invalid value" was encountered. Sometimes, it is possible for NaNs to appear in an array without your knowledge: for example, if you multiply infinity (np.inf) by zero. So, NumPy is warning us that it has encountered NaNs (the "invalid value") in case we weren't aware. We knew there were NaNs because we put them there, so in this scenario we can safely ignore the warning. However, if you encounter a warning like this in the future and you weren't expecting it, make sure you investigate the source of the warning!
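As a small, made-up example (not part of the assignment) of how NaNs can sneak in, and how to find them explicitly with np.isnan or summarize around them with np.nanmean:

import numpy as np

arr = np.array([1.0, np.inf * 0, 3.0])   # np.inf * 0 quietly evaluates to nan
print(arr)                               # [  1.  nan   3.]
print(np.isnan(arr))                     # [False  True False], locating the NaN explicitly
print(np.nanmean(arr))                   # 2.0, a mean that ignores NaN values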

Exercise: Threshold (2 points)

Write a function, threshold, which takes an array and returns a new array with values thresholded by the mean of the array.
def threshold(arr):
    """Computes the mean of the given array, and returns a new array which
    is 1 where values in the original array are greater than the mean, 0
    where they are equal to the mean, and -1 where they are less than the
    mean.

    Remember that if you want to create a copy of an array, you need to use
    `arr.copy()`.

    Hint: your solution should use boolean indexing, and can be done in six
    lines of code (including the return statement).

    Parameters
    ----------
    arr : numpy.ndarray

    Returns
    -------
    new_arr : thresholded version of `arr`

    """
    ### BEGIN SOLUTION
    below = arr < np.mean(arr)
    equal = arr == np.mean(arr)
    new_arr = np.ones(arr.shape)
    new_arr[below] = -1
    new_arr[equal] = 0
    return new_arr
    ### END SOLUTION
# add your own test cases in this cell!
%whos
Variable    Type        Data/Info
---------------------------------
data        ndarray     50x300: 15000 elems, type `float64`, 120000 bytes (117.1875 kb)
np          module      <module 'numpy' from '/us<...>kages/numpy/__init__.py'>
threshold   function    <function threshold at 0x7f9e18fec048>
too_fast    ndarray     50x300: 15000 elems, type `bool`, 15000 bytes
"""Try a few obvious threshold cases.""" from numpy.testing import assert_array_equal assert_array_equal(threshold(np.array([1, 1, 1, 1])), np.array([0, 0, 0, 0])) assert_array_equal(threshold(np.array([1, 0, 1, 0])), np.array([1, -1, 1, -1])) assert_array_equal(threshold(np.array([1, 0.5, 0, 0.5])), np.array([1, 0, -1, 0])) assert_array_equal( threshold(np.array([[0.5, 0.2, -0.3, 0.1], [1.7, -3.8, 0.5, 0.6]])), np.array([[1, 1, -1, 1], [1, -1, 1, 1]]))
"""Make sure a copy of the array is being returned, and that the original array is unmodified.""" x = np.array([[0.5, 0.2, -0.3, 0.1], [1.7, -3.8, 0.5, 0.6]]) y = threshold(x) assert_array_equal(x, np.array([[0.5, 0.2, -0.3, 0.1], [1.7, -3.8, 0.5, 0.6]])) assert_array_equal(y, np.array([[1, 1, -1, 1], [1, -1, 1, 1]]))

Array indexing

The other type of fancy indexing is array indexing. Let's consider each participant's average response, that is, the mean of each participant's reaction times across all of their trials:

data = np.load("data/experiment_data.npy")
avg_responses = np.mean(data, axis=1)
avg_responses
array([ 1698.68801725, 1888.71240023, 1796.53362098, 1879.6038851 , 1882.53249686, 1824.79568606, 1746.75780815, 1748.55448988, 1655.75347639, 1740.67757826, 1854.98538242, 1720.70259522, 1675.2006642 , 1746.52724187, 1768.64738486, 1794.45589925, 1860.06861469, 1835.73006077, 1520.77977686, 1795.55654863, 1794.26437533, 1716.73345285, 1740.64166499, 1704.87601852, 1906.06514665, 1722.68258855, 1857.70131135, 1878.26245376, 1741.26393398, 1680.21711839, 1830.55940979, 1697.03486501, 1892.45119973, 1888.69786047, 1653.73721041, 1794.17096019, 1779.9941148 , 1832.42610672, 1861.63504795, 1685.20108106, 1652.29647646, 1718.43799102, 1633.30628308, 1686.72435462, 1810.54490061, 1703.7949561 , 1747.64361845, 1670.90982655, 1830.47925898, 1771.15425183])
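If the axis argument is unfamiliar: axis=0 takes the mean down each column, while axis=1 takes the mean across each row, which is why axis=1 gives one average per participant here. A small sketch with a made-up 2x3 array:

import numpy as np

small = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])
print(np.mean(small, axis=0))   # [ 2.5  3.5  4.5], one mean per column
print(np.mean(small, axis=1))   # [ 2.  5.], one mean per row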

And let's say we also know which element corresponds to which participant, through the following participants array:

participants = np.load("data/experiment_participants.npy")
participants
array(['p_045', 'p_039', 'p_027', 'p_023', 'p_041', 'p_008', 'p_025', 'p_019', 'p_036', 'p_049', 'p_050', 'p_029', 'p_032', 'p_006', 'p_028', 'p_034', 'p_044', 'p_016', 'p_010', 'p_017', 'p_022', 'p_033', 'p_042', 'p_009', 'p_047', 'p_035', 'p_002', 'p_014', 'p_020', 'p_043', 'p_003', 'p_012', 'p_030', 'p_015', 'p_011', 'p_018', 'p_004', 'p_040', 'p_001', 'p_031', 'p_005', 'p_013', 'p_046', 'p_038', 'p_021', 'p_026', 'p_024', 'p_048', 'p_007', 'p_037'], dtype='<U5')

In other words, the first element of avg_responses corresponds to the first element of participants (so participant 45), the second element of avg_responses was given by participant 39, and so on.

Let's say we wanted to know what participants had the largest average response, and what participants had the smallest average response. To do this, we might try sorting the responses:

np.sort(avg_responses)
array([ 1520.77977686, 1633.30628308, 1652.29647646, 1653.73721041, 1655.75347639, 1670.90982655, 1675.2006642 , 1680.21711839, 1685.20108106, 1686.72435462, 1697.03486501, 1698.68801725, 1703.7949561 , 1704.87601852, 1716.73345285, 1718.43799102, 1720.70259522, 1722.68258855, 1740.64166499, 1740.67757826, 1741.26393398, 1746.52724187, 1746.75780815, 1747.64361845, 1748.55448988, 1768.64738486, 1771.15425183, 1779.9941148 , 1794.17096019, 1794.26437533, 1794.45589925, 1795.55654863, 1796.53362098, 1810.54490061, 1824.79568606, 1830.47925898, 1830.55940979, 1832.42610672, 1835.73006077, 1854.98538242, 1857.70131135, 1860.06861469, 1861.63504795, 1878.26245376, 1879.6038851 , 1882.53249686, 1888.69786047, 1888.71240023, 1892.45119973, 1906.06514665])

However, we then don't know which response corresponds to which participant. A different way to do this is to use np.argsort, which returns an array of indices corresponding to the sorted order of the elements, rather than the elements themselves in sorted order.
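Before applying it to the data, here is the difference on a small, made-up pair of arrays: np.sort returns the values in order, while np.argsort returns the positions those values came from, and those positions can then be used to reorder a second, parallel array.

import numpy as np

values = np.array([30, 10, 20])
names = np.array(['a', 'b', 'c'])

print(np.sort(values))       # [10 20 30], the sorted values themselves
order = np.argsort(values)
print(order)                 # [1 2 0], the indices that would sort values
print(values[order])         # [10 20 30], same result as np.sort(values)
print(names[order])          # ['b' 'c' 'a'], the parallel array reordered the same way

Now, applying np.argsort to the average responses: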

np.argsort(avg_responses)
array([18, 42, 40, 34, 8, 47, 12, 29, 39, 43, 31, 0, 45, 23, 21, 41, 11, 25, 22, 9, 28, 13, 6, 46, 7, 14, 49, 36, 35, 20, 15, 19, 2, 44, 5, 48, 30, 37, 17, 10, 26, 16, 38, 27, 3, 4, 33, 1, 32, 24])

What this says is that element 18 is the smallest response, element 42 is the next smallest response, and so on, all the way to element 24, which is the largest response:

avg_responses[18]
1520.7797768567086
avg_responses[42]
1633.3062830758922
avg_responses[24]
1906.0651466520821

This array of integers can itself be used as an index, which is the second form of fancy indexing. If we use it on the original array, then we will obtain the sorted elements:

avg_responses[np.argsort(avg_responses)]
array([ 1520.77977686, 1633.30628308, 1652.29647646, 1653.73721041, 1655.75347639, 1670.90982655, 1675.2006642 , 1680.21711839, 1685.20108106, 1686.72435462, 1697.03486501, 1698.68801725, 1703.7949561 , 1704.87601852, 1716.73345285, 1718.43799102, 1720.70259522, 1722.68258855, 1740.64166499, 1740.67757826, 1741.26393398, 1746.52724187, 1746.75780815, 1747.64361845, 1748.55448988, 1768.64738486, 1771.15425183, 1779.9941148 , 1794.17096019, 1794.26437533, 1794.45589925, 1795.55654863, 1796.53362098, 1810.54490061, 1824.79568606, 1830.47925898, 1830.55940979, 1832.42610672, 1835.73006077, 1854.98538242, 1857.70131135, 1860.06861469, 1861.63504795, 1878.26245376, 1879.6038851 , 1882.53249686, 1888.69786047, 1888.71240023, 1892.45119973, 1906.06514665])

And if we use it on our array of participants, then we can determine what participants had the largest and smallest responses:

participants[np.argsort(avg_responses)]
array(['p_010', 'p_046', 'p_005', 'p_011', 'p_036', 'p_048', 'p_032', 'p_043', 'p_031', 'p_038', 'p_012', 'p_045', 'p_026', 'p_009', 'p_033', 'p_013', 'p_029', 'p_035', 'p_042', 'p_049', 'p_020', 'p_006', 'p_025', 'p_024', 'p_019', 'p_028', 'p_037', 'p_004', 'p_018', 'p_022', 'p_034', 'p_017', 'p_027', 'p_021', 'p_008', 'p_007', 'p_003', 'p_040', 'p_016', 'p_050', 'p_002', 'p_044', 'p_001', 'p_014', 'p_023', 'p_041', 'p_015', 'p_039', 'p_030', 'p_047'], dtype='<U5')

So, in this case, participant 10 had the smallest average response, while participant 47 had the largest average response.
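If we only care about the extremes rather than the full ordering, np.argmin and np.argmax return the single index of the smallest or largest element directly. A short sketch using the arrays defined above:

print(participants[np.argmin(avg_responses)])   # p_010, the smallest average response
print(participants[np.argmax(avg_responses)])   # p_047, the largest average response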

From boolean to integer indices

Sometimes, we want to use a combination of boolean and array indexing. For example, if we wanted to pull out just the responses for participant 2, a natural approach would be to use boolean indexing:

participant_2_responses = data[participants == 'p_002']
participant_2_responses
array([[ 1519.95398267, 2268.97864618, 1195.65942267, 504.90066814, 1801.02089755, 925.24286169, 1810.30761149, 1325.80705157, 1175.54586424, 10408.41065951, 2455.08935241, 357.09813683, 1772.0844697 , 1196.14813706, 8314.75793716, 2675.68801506, 730.08949799, 425.0148563 , 2402.60428235, 1428.93884317, 227.37915983, 1302.38963652, 1593.64880997, 1806.89076956, 2533.22196284, 1207.73423053, 6303.25912496, 8886.54201129, 994.92553099, 1713.41231842, 401.87479894, 2505.85572837, 1952.73193948, 207.56620328, 603.30726209, 3616.1399997 , 830.73548608, 1068.04701882, 1469.02747094, 6370.22310164, 604.55979416, 9081.14741057, 1965.26863145, 3518.87071627, 517.41562916, 8229.56635964, 461.33763334, 780.20882914, 1650.10585622, 1314.87174398, 1224.22622967, 2083.55702061, 2968.86713293, 1869.42631685, 1111.82126554, 1948.83806694, 2415.18346365, 4444.99151467, 1597.0267606 , 499.87985072, 957.05596729, 1203.67765498, 1664.34221178, 5003.75224736, 967.55566474, 636.93388007, 915.99357265, 1120.80187967, 377.40672475, 1121.26218759, 1816.92139864, 1024.03377902, 1812.41020669, 2212.16494348, 6106.52376689, 1007.71524381, 695.393052 , 407.26961297, 1131.45314218, 3049.94912138, 4876.37266626, 1128.91791729, 1141.67606093, 1135.00006133, 6758.10849749, 1593.05943482, 1283.63625911, 2788.64115941, 581.88359516, 543.51599043, 4434.08247551, 430.26052784, 1070.51847351, 4064.66525915, 1549.92984649, 455.33216248, 1972.20829979, 9066.32915959, 896.51734192, 1314.62319535, 1296.01410683, 996.61616492, 6105.75385258, 1561.19737687, 3383.1386535 , 1292.54120004, 3853.30151832, 835.38429318, 743.8276473 , 1577.88578247, 1306.19949451, 1470.75195203, 555.2578609 , 217.45067187, 1256.68339959, 1186.51907184, 489.1850599 , 1853.43143023, 1995.60458866, 833.29382847, 1034.45330302, 1683.02047154, 545.29081816, 1838.80743097, 906.75426918, 1244.26630466, 1321.3404664 , 1587.52857985, 1410.35585247, 3105.319317 , 1411.51416317, 5443.92972674, 752.0169473 , 586.37613326, 572.36658753, 957.10289119, 1558.40600015, 372.5043349 , 943.38369338, 4163.59377408, 345.63171723, 2984.97324499, 1125.20342404, 919.58874102, 3128.38041682, 1630.50710335, 13257.2010995 , 910.32949952, 1195.78829195, 2170.91665291, 1282.76326571, 827.92918649, 857.79668237, 828.99037595, 934.21570992, 1665.98261859, 897.51378789, 739.37860435, 1804.45818838, 1943.92552284, 927.32634226, 1487.13836925, 5317.54152507, 807.78268149, 1180.09915534, 2179.15666221, 2274.58915959, 5160.42914214, 3252.8608973 , 3566.97537043, 770.46953568, 389.88662928, 281.21668416, 1641.56109359, 5993.78533537, 2193.09567337, 4281.75189016, 857.57255182, 446.91109426, 604.31734423, 1029.6613233 , 2003.21326662, 555.0428431 , 1000.38054076, 782.61377775, 559.70248518, 465.53667091, 1508.97589903, 1690.79653218, 868.60103515, 324.63978712, 3831.94213437, 2389.33460775, 328.24630795, 453.09315188, 667.70706354, 861.83882331, 449.95923305, 4161.35247677, 2838.9239933 , 3866.1204718 , 2132.74921404, 456.46404904, 1444.91289822, 2859.86453994, 625.01703188, 4696.29293529, 1706.16089941, 2351.7652479 , 1460.95979994, 324.1746197 , 623.50157152, 1320.7807782 , 1187.47604577, 3028.69232087, 2572.00444369, 2234.92844359, 2459.0302512 , 4092.38986667, 749.89150303, 1094.74299543, 2241.59126084, 3656.92713699, 2605.88444652, 7354.29967514, 2319.71428619, 2494.45531085, 870.8280167 , 314.48611818, 4831.59363981, 463.49488283, 1220.11219412, 1052.19978516, 3085.03346083, 3183.2792087 , 1348.71168181, 1634.84655526, 659.9016014 , 2063.0466589 , 4471.92194045, 1775.15128191, 
2662.54731125, 1201.29944168, 872.85026268, 256.10174776, 660.29442355, 1598.19175661, 1674.52470932, 1009.36470532, 3191.14929254, 2093.43620826, 673.92808951, 442.93059594, 414.55955897, 4675.46550612, 664.7509365 , 693.55749027, 327.38604708, 1024.30201959, 821.20424941, 1012.09661072, 1968.39632341, 512.52087257, 904.05253792, 345.20284479, 1720.46736155, 864.85812872, 1294.54236543, 841.30036722, 2494.56452101, 2126.02112408, 598.89309697, 1177.36784728, 1475.24512926, 2260.77568058, 1753.02932524, 1705.66756459, 1328.4368784 , 791.14544572, 995.85046493, 1049.73139615, 477.47972252, 2460.87899716, 747.58460743, 1214.14766658, 609.2686128 , 966.95494309, 1128.69125645, 351.02567865, 855.90684706, 497.76945168, 2322.1211325 , 3505.39941986, 919.77372431, 532.62433677, 1071.06498294, 1310.0555865 , 2448.01497075, 1075.98009604, 723.05119455]])

Another way that we could do this would be to determine the index of participant 2, and then use that to index into data. To do this, we can use a function called np.argwhere, which returns the indices of elements that are true:

np.argwhere(participants == 'p_002')
array([[26]])

So in this case, we see that participant 2 corresponds to index 26.
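Note that np.argwhere returns a two-dimensional array of indices (here with shape (1, 1)), so to pull a single row out of data it is convenient to extract the scalar index first. A short sketch using the arrays defined above:

idx = np.argwhere(participants == 'p_002')[0, 0]   # the scalar index, 26
participant_2_row = data[idx]                      # a 1-D array of that participant's 300 trials
participant_2_row.shape                            # (300,)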

Exercise: Averaging responses (2 points)

Write a function that takes as arguments a participant id, the data, and the list of participant names, and computes the average response for the given participant.
Occasionally we will ask you to raise an error if your function gets inputs that it's not expecting. As a reminder, to raise an error, you should use the raise keyword. For example, to raise a ValueError, you would do raise ValueError(message), where message is a string explaining specifically what the error was.
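For instance, a tiny hypothetical function (the name require_positive is made up and not part of the assignment) that raises a ValueError might look like this:

def require_positive(x):
    """Returns x unchanged, raising a ValueError if it is not positive."""
    if x <= 0:
        raise ValueError("expected a positive number, got {}".format(x))
    return x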
def participant_mean(participant, data, participants):
    """Computes the mean response for the given participant. A ValueError
    should be raised if more than one participant has the given name.

    Hint: your solution should use `np.argwhere`, and can be done in four
    lines (including the return statement).

    Parameters
    ----------
    participant : string
        The name/id of the participant
    data : numpy.ndarray with shape (n, m)
        Rows correspond to participants, columns to trials
    participants : numpy.ndarray with shape (n,)
        A string array containing participant names/ids, corresponding to
        the rows of the `data` array.

    Returns
    -------
    float : the mean response of the participant over all trials

    """
    ### BEGIN SOLUTION
    if len(np.argwhere(participants == participant)) > 1:
        raise ValueError('More than one participant with that name')
    else:
        return np.mean(data[np.argwhere(participants == participant)])
    ### END SOLUTION
# add your own test cases in this cell!
participant_mean('p_002', data, participants)
1857.7013113499095
"""Check for correct answers with the example experiment data.""" from numpy.testing import assert_allclose data = np.load("data/experiment_data.npy") participants = np.load("data/experiment_participants.npy") assert_allclose(participant_mean('p_002', data, participants), 1857.7013113499095) assert_allclose(participant_mean('p_047', data, participants), 1906.0651466520821) assert_allclose(participant_mean('p_013', data, participants), 1718.4379910225193)
"""Check for correct answers for some different data.""" data = np.arange(32).reshape((4, 8)) participants = np.array(['a', 'b', 'c', 'd']) assert_allclose(participant_mean('a', data, participants), 3.5) assert_allclose(participant_mean('b', data, participants), 11.5) assert_allclose(participant_mean('c', data, participants), 19.5) assert_allclose(participant_mean('d', data, participants), 27.5)
"""Check that a ValueError is raised when the participant name is not unique.""" from nose.tools import assert_raises data = np.arange(32).reshape((4, 8)) participants = np.array(['a', 'b', 'c', 'a']) assert_raises(ValueError, participant_mean, 'a', data, participants)