
Python Data Science Handbook


*The text is released under the **CC-BY-NC-ND license**, and code is released under the **MIT license**. If you find this content useful, please consider supporting the work by **buying the book**!*

# Combining Datasets: Concat and Append

Some of the most interesting studies of data come from combining different data sources. These operations can involve anything from very straightforward concatenation of two different datasets to more complicated database-style joins and merges that correctly handle any overlaps between the datasets. `Series` and `DataFrame`s are built with this type of operation in mind, and Pandas includes functions and methods that make this sort of data wrangling fast and straightforward.

Here we'll take a look at simple concatenation of `Series` and `DataFrame`s with the `pd.concat` function; later we'll dive into more sophisticated in-memory merges and joins implemented in Pandas.

We begin with the standard imports:
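A minimal version of that import cell:

```python
import numpy as np
import pandas as pd
```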

For convenience, we'll define this function, which creates a `DataFrame` of a particular form that will be useful below:
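A sketch of such a helper, along the lines of what the notebook defines (the name `make_df` is an assumption; the output matches the table below):

```python
import pandas as pd

def make_df(cols, ind):
    """Quickly make a DataFrame whose entries are strings like 'A0', 'B1', ..."""
    data = {c: [str(c) + str(i) for i in ind] for c in cols}
    return pd.DataFrame(data, ind)

# example DataFrame
make_df('ABC', range(3))
```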

|   | A  | B  | C  |
|---|----|----|----|
| 0 | A0 | B0 | C0 |
| 1 | A1 | B1 | C1 |
| 2 | A2 | B2 | C2 |

In addition, we'll create a quick class that allows us to display multiple `DataFrame`s side by side. The code makes use of the special `_repr_html_` method, which IPython uses to implement its rich object display:

The use of this will become clearer as we continue our discussion in the following section.

## Recall: Concatenation of NumPy Arrays

Concatenation of `Series` and `DataFrame` objects is very similar to concatenation of NumPy arrays, which can be done via the `np.concatenate` function as discussed in The Basics of NumPy Arrays. Recall that with it, you can combine the contents of two or more arrays into a single array:
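For example, a quick illustration:

```python
import numpy as np

x = [1, 2, 3]
y = [4, 5, 6]
z = [7, 8, 9]
np.concatenate([x, y, z])  # → array([1, 2, 3, 4, 5, 6, 7, 8, 9])
```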

The first argument is a list or tuple of arrays to concatenate. Additionally, it takes an `axis` keyword that allows you to specify the axis along which the result will be concatenated:
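For two-dimensional input, `axis=1` concatenates column-wise:

```python
import numpy as np

x = [[1, 2],
     [3, 4]]
np.concatenate([x, x], axis=1)  # → array([[1, 2, 1, 2], [3, 4, 3, 4]])
```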

## Simple Concatenation with `pd.concat`

Pandas has a function, `pd.concat()`, which has a similar syntax to `np.concatenate` but contains a number of options that we'll discuss momentarily:

`pd.concat()` can be used for a simple concatenation of `Series` or `DataFrame` objects, just as `np.concatenate()` can be used for simple concatenations of arrays:
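A small example with two `Series` whose indices don't overlap:

```python
import pandas as pd

ser1 = pd.Series(['A', 'B', 'C'], index=[1, 2, 3])
ser2 = pd.Series(['D', 'E', 'F'], index=[4, 5, 6])
pd.concat([ser1, ser2])
```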

It also works to concatenate higher-dimensional objects, such as `DataFrame`s:
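The two `DataFrame`s shown below can be built directly; a self-contained equivalent of the notebook's setup:

```python
import pandas as pd

df1 = pd.DataFrame({'A': ['A1', 'A2'], 'B': ['B1', 'B2']}, index=[1, 2])
df2 = pd.DataFrame({'A': ['A3', 'A4'], 'B': ['B3', 'B4']}, index=[3, 4])
pd.concat([df1, df2])
```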

`df1`

|   | A  | B  |
|---|----|----|
| 1 | A1 | B1 |
| 2 | A2 | B2 |

`df2`

|   | A  | B  |
|---|----|----|
| 3 | A3 | B3 |
| 4 | A4 | B4 |

`pd.concat([df1, df2])`

|   | A  | B  |
|---|----|----|
| 1 | A1 | B1 |
| 2 | A2 | B2 |
| 3 | A3 | B3 |
| 4 | A4 | B4 |

By default, the concatenation takes place row-wise within the `DataFrame` (i.e., `axis=0`). Like `np.concatenate`, `pd.concat` allows specification of an axis along which concatenation will take place. Consider the following example:

`df3`

|   | A  | B  |
|---|----|----|
| 0 | A0 | B0 |
| 1 | A1 | B1 |

`df4`

|   | C  | D  |
|---|----|----|
| 0 | C0 | D0 |
| 1 | C1 | D1 |

`pd.concat([df3, df4], axis='columns')`

|   | A  | B  | C  | D  |
|---|----|----|----|----|
| 0 | A0 | B0 | C0 | D0 |
| 1 | A1 | B1 | C1 | D1 |

We could have equivalently specified `axis=1`; here we've used the more intuitive `axis='columns'`.

### Duplicate indices

One important difference between `np.concatenate` and `pd.concat` is that Pandas concatenation *preserves indices*, even if the result will have duplicate indices! Consider this simple example:

`x`

|   | A  | B  |
|---|----|----|
| 0 | A0 | B0 |
| 1 | A1 | B1 |

`y`

|   | A  | B  |
|---|----|----|
| 0 | A2 | B2 |
| 1 | A3 | B3 |

`pd.concat([x, y])`

|   | A  | B  |
|---|----|----|
| 0 | A0 | B0 |
| 1 | A1 | B1 |
| 0 | A2 | B2 |
| 1 | A3 | B3 |

Notice the repeated indices in the result. While this is valid within `DataFrame`s, the outcome is often undesirable. `pd.concat()` gives us a few ways to handle it.

#### Catching the repeats as an error

If you'd like to simply verify that the indices in the result of `pd.concat()` do not overlap, you can specify the `verify_integrity` flag. With this set to `True`, the concatenation will raise an exception if there are duplicate indices. Here is an example, where for clarity we'll catch and print the error message:
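A self-contained sketch of that check (the two frames deliberately share the indices 0 and 1):

```python
import pandas as pd

x = pd.DataFrame({'A': ['A0', 'A1'], 'B': ['B0', 'B1']}, index=[0, 1])
y = pd.DataFrame({'A': ['A2', 'A3'], 'B': ['B2', 'B3']}, index=[0, 1])

try:
    pd.concat([x, y], verify_integrity=True)
except ValueError as e:
    print("ValueError:", e)  # the overlapping index values are named in the message
```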

#### Ignoring the index

Sometimes the index itself does not matter, and you would prefer it to simply be ignored. This option can be specified using the `ignore_index` flag. With this set to `True`, the concatenation will create a new integer index for the result:

`x`

|   | A  | B  |
|---|----|----|
| 0 | A0 | B0 |
| 1 | A1 | B1 |

`y`

|   | A  | B  |
|---|----|----|
| 0 | A2 | B2 |
| 1 | A3 | B3 |

`pd.concat([x, y], ignore_index=True)`

|   | A  | B  |
|---|----|----|
| 0 | A0 | B0 |
| 1 | A1 | B1 |
| 2 | A2 | B2 |
| 3 | A3 | B3 |

#### Adding MultiIndex keys

Another option is to use the `keys` option to specify a label for the data sources; the result will be a hierarchically indexed `DataFrame` containing the data:

`x`

|   | A  | B  |
|---|----|----|
| 0 | A0 | B0 |
| 1 | A1 | B1 |

`y`

|   | A  | B  |
|---|----|----|
| 0 | A2 | B2 |
| 1 | A3 | B3 |

`pd.concat([x, y], keys=['x', 'y'])`

|   |   | A  | B  |
|---|---|----|----|
| x | 0 | A0 | B0 |
|   | 1 | A1 | B1 |
| y | 0 | A2 | B2 |
|   | 1 | A3 | B3 |

The result is a multiply indexed `DataFrame`, and we can use the tools discussed in Hierarchical Indexing to transform this data into the representation we're interested in.
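For instance, the outer level of the resulting index can be used to pull one source's rows back out:

```python
import pandas as pd

x = pd.DataFrame({'A': ['A0', 'A1'], 'B': ['B0', 'B1']})
y = pd.DataFrame({'A': ['A2', 'A3'], 'B': ['B2', 'B3']})
hier = pd.concat([x, y], keys=['x', 'y'])

# select the rows that came from y
hier.loc['y']
```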

### Concatenation with joins

In the simple examples we just looked at, we were mainly concatenating `DataFrame`s with shared column names. In practice, data from different sources might have different sets of column names, and `pd.concat` offers several options in this case. Consider the concatenation of the following two `DataFrame`s, which have some (but not all!) columns in common:

`df5`

|   | A  | B  | C  |
|---|----|----|----|
| 1 | A1 | B1 | C1 |
| 2 | A2 | B2 | C2 |

`df6`

|   | B  | C  | D  |
|---|----|----|----|
| 3 | B3 | C3 | D3 |
| 4 | B4 | C4 | D4 |

`pd.concat([df5, df6])`

|   | A   | B  | C  | D   |
|---|-----|----|----|-----|
| 1 | A1  | B1 | C1 | NaN |
| 2 | A2  | B2 | C2 | NaN |
| 3 | NaN | B3 | C3 | D3  |
| 4 | NaN | B4 | C4 | D4  |

By default, the entries for which no data is available are filled with NA values. To change this, we can specify one of several options for the `join` and `join_axes` parameters of the concatenate function. By default, the join is a union of the input columns (`join='outer'`), but we can change this to an intersection of the columns using `join='inner'`:

`df5`

|   | A  | B  | C  |
|---|----|----|----|
| 1 | A1 | B1 | C1 |
| 2 | A2 | B2 | C2 |

`df6`

|   | B  | C  | D  |
|---|----|----|----|
| 3 | B3 | C3 | D3 |
| 4 | B4 | C4 | D4 |

`pd.concat([df5, df6], join='inner')`

|   | B  | C  |
|---|----|----|
| 1 | B1 | C1 |
| 2 | B2 | C2 |
| 3 | B3 | C3 |
| 4 | B4 | C4 |

Another option is to directly specify the index of the remaining columns using the `join_axes` argument, which takes a list of index objects. Here we'll specify that the returned columns should be the same as those of the first input:

`df5`

|   | A  | B  | C  |
|---|----|----|----|
| 1 | A1 | B1 | C1 |
| 2 | A2 | B2 | C2 |

`df6`

|   | B  | C  | D  |
|---|----|----|----|
| 3 | B3 | C3 | D3 |
| 4 | B4 | C4 | D4 |

`pd.concat([df5, df6], join_axes=[df5.columns])`

|   | A   | B  | C  |
|---|-----|----|----|
| 1 | A1  | B1 | C1 |
| 2 | A2  | B2 | C2 |
| 3 | NaN | B3 | C3 |
| 4 | NaN | B4 | C4 |
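Note that the `join_axes` argument was deprecated in pandas 0.25 and later removed; in current pandas the same result can be obtained by reindexing the concatenated result. A sketch:

```python
import pandas as pd

df5 = pd.DataFrame({'A': ['A1', 'A2'], 'B': ['B1', 'B2'],
                    'C': ['C1', 'C2']}, index=[1, 2])
df6 = pd.DataFrame({'B': ['B3', 'B4'], 'C': ['C3', 'C4'],
                    'D': ['D3', 'D4']}, index=[3, 4])

# equivalent to pd.concat([df5, df6], join_axes=[df5.columns]) in older pandas
result = pd.concat([df5, df6]).reindex(columns=df5.columns)
```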

The combination of options of the `pd.concat` function allows a wide range of possible behaviors when joining two datasets; keep these in mind as you use these tools for your own data.

### The `append()` method

Because direct array concatenation is so common, `Series` and `DataFrame` objects have an `append` method that can accomplish the same thing in fewer keystrokes. For example, rather than calling `pd.concat([df1, df2])`, you can simply call `df1.append(df2)`:

`df1`

|   | A  | B  |
|---|----|----|
| 1 | A1 | B1 |
| 2 | A2 | B2 |

`df2`

|   | A  | B  |
|---|----|----|
| 3 | A3 | B3 |
| 4 | A4 | B4 |

`df1.append(df2)`

|   | A  | B  |
|---|----|----|
| 1 | A1 | B1 |
| 2 | A2 | B2 |
| 3 | A3 | B3 |
| 4 | A4 | B4 |

Keep in mind that unlike the `append()` and `extend()` methods of Python lists, the `append()` method in Pandas does not modify the original object; instead it creates a new object with the combined data. It also is not a very efficient method, because it involves creation of a new index *and* data buffer. Thus, if you plan to do multiple `append` operations, it is generally better to build a list of `DataFrame`s and pass them all at once to the `concat()` function.
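A sketch of that pattern (note also that `DataFrame.append` itself was deprecated and then removed in pandas 2.0, making `pd.concat` the way to go):

```python
import pandas as pd

# collect the pieces in a list and concatenate once, rather than
# repeatedly appending (which copies the accumulated data each time)
pieces = [pd.DataFrame({'A': [i], 'B': [i * 10]}) for i in range(3)]
result = pd.concat(pieces, ignore_index=True)
```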

In the next section, we'll look at another, more powerful approach to combining data from multiple sources: the database-style merges/joins implemented in `pd.merge`. For more information on `concat()`, `append()`, and related functionality, see the "Merge, Join, and Concatenate" section of the Pandas documentation.