diff --git a/Python/Module3_IntroducingNumpy/VectorizedOperations.ipynb b/Python/Module3_IntroducingNumpy/VectorizedOperations.ipynb index eecb0fb8..00d9b3ea 100644 --- a/Python/Module3_IntroducingNumpy/VectorizedOperations.ipynb +++ b/Python/Module3_IntroducingNumpy/VectorizedOperations.ipynb @@ -705,8 +705,125 @@ "metadata": {}, "source": [ "## Linear Algebra\n", - "Lastly, we note that NumPy provides a suite of functions that can perform optimized computations and routines relevant to linear algebra. Included here are functions for performing matrix products and tensor products, solving eigenvalue problems, inverting matrices, and computing vector normalizations. Please refer to the [official NumPy documentation](https://docs.scipy.org/doc/numpy-1.13.0/reference/routines.linalg.html) for a full listing of these functions.\n", - "\n" + "Lastly, we note that NumPy provides a suite of functions that can perform optimized computations and routines relevant to linear algebra. Included here are functions for performing matrix products and tensor products, solving eigenvalue problems, inverting matrices, and computing vector normalizations. Please refer to the [official NumPy documentation](https://docs.scipy.org/doc/numpy-1.13.0/reference/routines.linalg.html) for a full listing of these functions. We highlight a few of them below.\n", + "\n", + "### Dot Product\n", + "```python\n", + "# Compute the inner product of two 1-D vectors\n", + ">>> a = np.arange(1, 4)\n", + ">>> b = np.arange(4, 7)\n", + ">>> np.dot(a, b)\n", + "32\n", + "```\n", + "With an N-D array `a` and a 1-D array `b`, the result is a sum product over the last axis of `a` and `b`.\n", + "```python\n", + ">>> a = np.array([[1, 2], [3, 4], [5, 6]])\n", + ">>> b = np.array([4, 1])\n", + ">>> np.dot(a, b)\n", + "array([ 6, 16, 26])\n", + "```\n", + "\n", + "### Matrix multiplication\n", + "Unlike `np.dot`, the NumPy matrix-multiplication function `np.matmul` is designed specifically to multiply matrices (2-D arrays), rather than to operate on arrays of arbitrary dimension. Accordingly (the 1-D and stacked cases are demonstrated at the end of this section):\n", + "\n", + "* Arguments to `np.matmul` cannot be scalars.\n", + "* If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.\n", + "* If the first argument is 1-D, it is promoted to a matrix by prepending a 1 to its dimensions. After matrix multiplication the prepended 1 is removed.\n", + "* If the second argument is 1-D, it is promoted to a matrix by appending a 1 to its dimensions. After matrix multiplication the appended 1 is removed.\n", + "\n", + "```python\n", + "# Compute the product of two matrices\n", + ">>> a = np.arange(1, 4).reshape(3, 1)\n", + ">>> b = np.arange(4, 7).reshape(1, 3)\n", + ">>> np.matmul(a, b)\n", + "array([[ 4, 5, 6],\n", + " [ 8, 10, 12],\n", + " [12, 15, 18]])\n", + "```\n", + "The `@` operator is equivalent to `np.matmul` and can be used whenever either argument is a NumPy array (the other argument may be any array-like object).\n", + "```python\n", + ">>> a @ b\n", + "array([[ 4, 5, 6],\n", + " [ 8, 10, 12],\n", + " [12, 15, 18]])\n", + ">>> a @ [[1, 2, 3]]\n", + "array([[1, 2, 3],\n", + " [2, 4, 6],\n", + " [3, 6, 9]])\n", + "```\n",
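+ "\n", + "To illustrate the 1-D promotion and stacked-matrix behaviors listed above, here is a brief sketch (the variable names `m`, `v`, and `stack` are arbitrary choices for this example):\n", + "```python\n", + "# A (2, 3) matrix times a 1-D vector: the vector is promoted to a column matrix,\n", + "# and the appended dimension is removed from the result\n", + ">>> m = np.arange(1, 7).reshape(2, 3)\n", + ">>> v = np.arange(3)\n", + ">>> np.matmul(m, v)\n", + "array([ 8, 17])\n", + "\n", + "# A 3-D array is treated as a stack of matrices and broadcast against `v`\n", + ">>> stack = np.arange(12).reshape(2, 2, 3)\n", + ">>> np.matmul(stack, v)\n", + "array([[ 5, 14],\n", + " [23, 32]])\n", + "```\n",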
+ "\n", + "### np.einsum\n", + "`np.einsum` is a very powerful tool available in NumPy that uses [**Ein**stein **sum**mation notation](https://en.wikipedia.org/wiki/Einstein_notation), where repeated indices are summed over to compute the final product. For example, you can use Einstein notation to compute a matrix product (here, the outer product of two vectors) like so\n", + "```python\n", + ">>> a = np.arange(1, 4).reshape(3, 1)\n", + ">>> b = np.arange(4, 7).reshape(1, 3)\n", + ">>> np.einsum('ij,jk->ik', a, b)\n", + "array([[ 4, 5, 6],\n", + " [ 8, 10, 12],\n", + " [12, 15, 18]])\n", + "```\n", + "Its flexible notation means that you can perform a wide variety of products, sums, and transpositions on a pair of NumPy arrays. As an example, consider the following arrays\n", + "\n", + "```python\n", + ">>> a = np.arange(1, 4)\n", + ">>> b = np.arange(12).reshape(3, 4)\n", + "```\n", + "\n", + "These arrays cannot be broadcast together, so if I want to multiply them together, I need to add a new axis to array `a`.\n", + "\n", + "```python\n", + ">>> a * b\n", + "ValueError: operands could not be broadcast together with shapes (3,) (3,4) \n", + ">>> a.reshape(3, 1) * b\n", + "array([[ 0, 1, 2, 3],\n", + " [ 8, 10, 12, 14],\n", + " [24, 27, 30, 33]])\n", + "```\n", + "\n", + "One wrinkle with this approach is that `a` must be explicitly reshaped before the multiplication is performed. When `np.einsum` is used, no reshaping is needed; the result is computed directly as the operation traverses the axes. This can make `np.einsum` faster _and_ more memory efficient than the example above, especially when the product is immediately reduced (as in the summation examples below).\n", + "\n", + "```python\n", + ">>> np.einsum('i,ij->ij', a, b)\n", + "array([[ 0, 1, 2, 3],\n", + " [ 8, 10, 12, 14],\n", + " [24, 27, 30, 33]])\n", + "```\n", + "\n", + "This example can be extended further with operations that take place on the resulting array (e.g. summing over the rows or the columns of the result).\n", + "\n", + "```python\n", + "# Multiply, then sum over the rows (axis 0) of the result\n", + ">>> np.einsum('i,ij->j', a, b)\n", + "array([32, 38, 44, 50])\n", + "\n", + "# Multiply, then sum over the columns (axis 1) of the result\n", + ">>> np.einsum('i,ij->i', a, b)\n", + "array([ 6, 44, 114])\n", + "```\n", + "\n", + "For more information on `np.einsum`, check out the [official documentation](https://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.einsum.html) or [this handy guide](http://ajcr.net/Basic-guide-to-einsum/).\n", + "\n", + "### np.linalg\n", + "\n", + "[np.linalg](https://docs.scipy.org/doc/numpy-1.13.0/reference/routines.linalg.html) is another handy module to be familiar with. It contains many utilities for accomplishing common tasks in linear algebra, including [solving systems of linear equations](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.solve.html) and [computing the eigenvalues and eigenvectors of a matrix](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.linalg.eig.html#numpy.linalg.eig). A few examples are highlighted below.\n", + "\n", + "```python\n", + "# Get the Euclidean norm of a vector\n", + ">>> a = np.arange(12)\n", + ">>> a\n", + "array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])\n", + ">>> np.linalg.norm(a)\n", + "22.494443758403985\n", + "```\n", + "`np.linalg` even has its own exception type, which is useful for catching errors related to the shape or contents of an array argument. For example, `np.linalg.inv` raises an error if the provided array is not a square 2-D matrix.\n", + "```python\n", + "# Find the inverse of a matrix\n", + ">>> a = np.arange(1, 5)\n", + ">>> np.linalg.inv(a) # raises numpy.linalg.linalg.LinAlgError\n", + ">>> np.linalg.inv(a.reshape(2, 2))\n", + "array([[-2. , 1. ],\n", + " [ 1.5, -0.5]])\n", + "```" ] }, { @@ -815,7 +932,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.6.3" + "version": "3.6.2" } }, "nbformat": 4,