Kernel: Python 3 (Anaconda 5)

If you don't have Python, you can use ReplIt:

https://repl.it/languages/Python3

Let's write some text to disk!

# https://repl.it/languages/Python3
f = open('input.txt', mode='w')
f.write("Hello World!")
f.close()

f = open('input.txt', mode='r')
print(f.read())
f.close()
Hello World!

There are a lot of fiddly bits to files. How do you look up the options and operations?

help(open)  # ? in IPython
Help on built-in function open in module io:

open(file, mode='r', buffering=-1, encoding=None, errors=None, newline=None, closefd=True, opener=None)

Open file and return a stream. Raise IOError upon failure.

file is either a text or byte string giving the name (and the path if the file isn't in the current working directory) of the file to be opened or an integer file descriptor of the file to be wrapped. (If a file descriptor is given, it is closed when the returned I/O object is closed, unless closefd is set to False.)

mode is an optional string that specifies the mode in which the file is opened. It defaults to 'r' which means open for reading in text mode. Other common values are 'w' for writing (truncating the file if it already exists), 'x' for creating and writing to a new file, and 'a' for appending (which on some Unix systems, means that all writes append to the end of the file regardless of the current seek position). In text mode, if encoding is not specified the encoding used is platform dependent: locale.getpreferredencoding(False) is called to get the current locale encoding. (For reading and writing raw bytes use binary mode and leave encoding unspecified.) The available modes are:

========= ===============================================================
Character Meaning
--------- ---------------------------------------------------------------
'r'       open for reading (default)
'w'       open for writing, truncating the file first
'x'       create a new file and open it for writing
'a'       open for writing, appending to the end of the file if it exists
'b'       binary mode
't'       text mode (default)
'+'       open a disk file for updating (reading and writing)
'U'       universal newline mode (deprecated)
========= ===============================================================

The default mode is 'rt' (open for reading text). For binary random access, the mode 'w+b' opens and truncates the file to 0 bytes, while 'r+b' opens the file without truncation. The 'x' mode implies 'w' and raises an `FileExistsError` if the file already exists.

Python distinguishes between files opened in binary and text modes, even when the underlying operating system doesn't. Files opened in binary mode (appending 'b' to the mode argument) return contents as bytes objects without any decoding. In text mode (the default, or when 't' is appended to the mode argument), the contents of the file are returned as strings, the bytes having been first decoded using a platform-dependent encoding or using the specified encoding if given.

'U' mode is deprecated and will raise an exception in future versions of Python. It has no effect in Python 3. Use newline to control universal newlines mode.

buffering is an optional integer used to set the buffering policy. Pass 0 to switch buffering off (only allowed in binary mode), 1 to select line buffering (only usable in text mode), and an integer > 1 to indicate the size of a fixed-size chunk buffer. When no buffering argument is given, the default buffering policy works as follows:

* Binary files are buffered in fixed-size chunks; the size of the buffer is chosen using a heuristic trying to determine the underlying device's "block size" and falling back on `io.DEFAULT_BUFFER_SIZE`. On many systems, the buffer will typically be 4096 or 8192 bytes long.

* "Interactive" text files (files for which isatty() returns True) use line buffering. Other text files use the policy described above for binary files.

encoding is the name of the encoding used to decode or encode the file. This should only be used in text mode. The default encoding is platform dependent, but any encoding supported by Python can be passed. See the codecs module for the list of supported encodings.

errors is an optional string that specifies how encoding errors are to be handled---this argument should not be used in binary mode. Pass 'strict' to raise a ValueError exception if there is an encoding error (the default of None has the same effect), or pass 'ignore' to ignore errors. (Note that ignoring encoding errors can lead to data loss.) See the documentation for codecs.register or run 'help(codecs.Codec)' for a list of the permitted encoding error strings.

newline controls how universal newlines works (it only applies to text mode). It can be None, '', '\n', '\r', and '\r\n'. It works as follows:

* On input, if newline is None, universal newlines mode is enabled. Lines in the input can end in '\n', '\r', or '\r\n', and these are translated into '\n' before being returned to the caller. If it is '', universal newline mode is enabled, but line endings are returned to the caller untranslated. If it has any of the other legal values, input lines are only terminated by the given string, and the line ending is returned to the caller untranslated.

* On output, if newline is None, any '\n' characters written are translated to the system default line separator, os.linesep. If newline is '' or '\n', no translation takes place. If newline is any of the other legal values, any '\n' characters written are translated to the given string.

If closefd is False, the underlying file descriptor will be kept open when the file is closed. This does not work when a file name is given and must be True in that case.

A custom opener can be used by passing a callable as *opener*. The underlying file descriptor for the file object is then obtained by calling *opener* with (*file*, *flags*). *opener* must return an open file descriptor (passing os.open as *opener* results in functionality similar to passing None).

open() returns a file object whose type depends on the mode, and through which the standard file operations such as reading and writing are performed. When open() is used to open a file in a text mode ('w', 'r', 'wt', 'rt', etc.), it returns a TextIOWrapper. When used to open a file in a binary mode, the returned class varies: in read binary mode, it returns a BufferedReader; in write binary and append binary modes, it returns a BufferedWriter, and in read/write mode, it returns a BufferedRandom.

It is also possible to use a string or bytearray as a file for both reading and writing. For strings StringIO can be used like a file opened in a text mode, and for bytes a BytesIO can be used like a file opened in a binary mode.
dir(f)  # ?f.read* in IPython
['_CHUNK_SIZE', '__class__', '__del__', '__delattr__', '__dict__', '__dir__', '__doc__', '__enter__', '__eq__', '__exit__', '__format__', '__ge__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__lt__', '__ne__', '__new__', '__next__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '_checkClosed', '_checkReadable', '_checkSeekable', '_checkWritable', '_finalizing', 'buffer', 'close', 'closed', 'detach', 'encoding', 'errors', 'fileno', 'flush', 'isatty', 'line_buffering', 'mode', 'name', 'newlines', 'read', 'readable', 'readline', 'readlines', 'seek', 'seekable', 'tell', 'truncate', 'writable', 'write', 'writelines']

What do you expect to happen if you read after closing?

What do you expect to happen if you open many files without closing them?

Python Context Managers help us avoid these sorts of mistakes

with open('input.txt') as f:
    print(f.read())
print(f.closed)
Hello World!
True
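To see the answer to the earlier question about reading after closing: the file object refuses further I/O once it is closed. A minimal sketch (the exact error message may vary between Python versions):

f = open('input.txt')
f.close()
try:
    f.read()
except ValueError as e:
    print(e)  # e.g. "I/O operation on closed file."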

What do you expect to happen if you read twice?

with open('input.txt') as f:
    print(f.read())
    print(f.read())
Hello World!

File objects keep track of a position that advances as you read and write.
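A short sketch of that position using tell() and seek(), assuming input.txt still holds "Hello World!" from the cells above:

with open('input.txt') as f:
    print(f.tell())    # 0: the position starts at the beginning
    print(f.read(5))   # 'Hello' -- reading advances the position
    print(f.tell())    # 5 for this plain ASCII file
    f.seek(0)          # jump back to the start
    print(f.read())    # 'Hello World!'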

How do you read a file line by line?

coconuts = 'coconuts.txt'
with open(coconuts, 'w') as f:
    f.write("I've got a lovely bunch of coconuts\n")
    f.write("There they are, all standing in a row\n")
    f.write("Big ones, small ones, some as big as your head\n")
    f.write("Give them a twist a flick of the wrist\n")
    f.write("That's what the showman said")

with open(coconuts, 'r') as f:
    for line in f:
        print(line)
I've got a lovely bunch of coconuts
There they are, all standing in a row
Big ones, small ones, some as big as your head
Give them a twist a flick of the wrist
That's what the showman said

This is great! But it is kind of a blank slate.

How do we make good life choices when storing data?

JSON is a lightweight specification (15 pages) for text-based data containing (see the round-trip sketch after this list):

  • strings

  • booleans

  • integers

  • floats (including exponential notation)

  • nulls (called None in Python)

  • Objects (called dictionaries in Python)

  • Arrays (called lists in Python)
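A minimal sketch of how those JSON types map onto Python values when round-tripped through the json module (the key names below are made up for illustration):

import json

# One example of each JSON type and its Python counterpart
doc = {
    "a_string": "hello",
    "a_boolean": True,
    "an_integer": 42,
    "a_float": 6.02e23,
    "a_null": None,
    "an_object": {"nested": "dictionary"},
    "an_array": [1, 2, 3],
}

text = json.dumps(doc)          # Python -> JSON text
print(text)
print(json.loads(text) == doc)  # JSON text -> Python; round-trips to an equal dict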

What are these type things, and why do we need them?

  • CSV files don't have types

  • Bash (shell) scripts don't have types

Gene name errors are widespread in the scientific literature

Mark Ziemann, Yotam Eren and Assam El-Osta

https://doi.org/10.1186/s13059-016-1044-7

The spreadsheet software Microsoft Excel, when used with default settings, is known to convert gene names to dates and floating point numbers. A programmatic scan of leading genomics journals reveals that approximately one-fifth of papers with supplementary Excel gene lists contain erroneous gene name conversions.

Having ambiguous input/output multiplies the amount of work.

The Unix command to list files (ls) needs one flag (-1) for newline-separated output, and another (-m) for comma-separated output.

Lots of commands have a flag for every possible interpretation of concepts like list and number.

Poorly specified formats are hard to implement

Many CSV parsers fail when values contain commas or newlines
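A sketch of the problem: a naive split on commas mangles a quoted field, while Python's csv module (which implements the quoting rules) handles it. The record below is made up for illustration:

import csv
import io

row = 'gene,"Sep 2, 2007",3.5\n'   # hypothetical record with a comma inside quotes

print(row.strip().split(','))              # naive parse: the quoted field is broken apart
print(next(csv.reader(io.StringIO(row))))  # csv module: ['gene', 'Sep 2, 2007', '3.5']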

At the lowest level, there is no difference between a string, integer, or float.

import struct

s = 'abcd'
b = bytes('abcd', 'utf-8')

# The same four bytes, interpreted as a string, as integers, and as a float
print('string:', s)
print('integers:', list(ord(c) for c in s))
print('float:', struct.unpack('f', b))
string: abcd
integers: [97, 98, 99, 100]
float: (1.6777999408082104e+22,)

Types determine representation and behavior

# The '+' operation varies wildly depending on the relevant type
print('conc' + 'atenation')
print(2 + 3)
print('conc' + 3)
concatenation
5
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-7-af1cce258651> in <module>()
      2 print('conc' + 'atenation')
      3 print(2 + 3)
----> 4 print('conc' + 3)

TypeError: must be str, not int

List: [ , , ]

  • Mutable sequence

  • Think 'examples of X'

  • Called an Array in JSON, represented the same way

vals = ['1', 1]
vals.append(1.0)
vals += ['one']
print(vals[3])
one

Tuple ( , , )

  • Immutable sequence

  • Think 'group of associated data'

  • Also look up collections.namedtuple (see the sketch below)

  • No JSON representation

attributes = ('some', 'associated', 'data')
val1, val2, val3 = attributes
print(attributes[2])
data
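As mentioned in the bullet above, collections.namedtuple gives the same immutable grouping but with named fields. A minimal sketch (the Person type and its fields are made up for illustration):

from collections import namedtuple

# A hypothetical record type with named fields
Person = namedtuple('Person', ['name', 'craft'])
p = Person(name='Oleg Artemyev', craft='ISS')

print(p.craft)   # access by name
print(p[1])      # still works by position, like a plain tuple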

Dictionary { : , : }

  • Unique Key Value Pairs

  • Keys must be immutable

  • Think 'I want to look this up later'

  • Confusingly, called an object in JSON. Represented the same way.

d = {'name': 'Jared', 'date': '2018-10-01'}
print(d['name'])
del d['name']
print(d)
Jared
{'date': '2018-10-01'}

Set { , , }

  • Unique Collection

  • No JSON representation

s = {'one', 'two'}
s.add('two')
s.remove('one')
print(s)
{'two'}

Describe the difference between "123" and 123 in a few sentences.

Why doesn't JSON provide Set and Tuple collections?
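One way to see the tradeoff: the json module flattens a tuple into an array and refuses a set outright. A quick sketch (the error message may vary by Python version):

import json

print(json.dumps((1, 2, 3)))   # a tuple becomes a JSON array: [1, 2, 3]

try:
    json.dumps({1, 2, 3})      # sets have no JSON counterpart
except TypeError as e:
    print(e)                   # e.g. "Object of type 'set' is not JSON serializable"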

Let's load some JSON data!

People in space -- Nathan Bergey

http://api.open-notify.org/astros.json

import json
from pprint import pprint

in_space = """
{
  "message": "success",
  "people": [
    {"craft": "ISS", "name": "Oleg Artemyev"},
    {"craft": "ISS", "name": "Andrew Feustel"},
    {"craft": "ISS", "name": "Richard Arnold"},
    {"craft": "ISS", "name": "Sergey Prokopyev"},
    {"craft": "ISS", "name": "Alexander Gerst"},
    {"craft": "ISS", "name": "Serena Aunon-Chancellor"}
  ],
  "number": 6
}
"""

pprint(json.loads(in_space))
{'message': 'success',
 'number': 6,
 'people': [{'craft': 'ISS', 'name': 'Oleg Artemyev'},
            {'craft': 'ISS', 'name': 'Andrew Feustel'},
            {'craft': 'ISS', 'name': 'Richard Arnold'},
            {'craft': 'ISS', 'name': 'Sergey Prokopyev'},
            {'craft': 'ISS', 'name': 'Alexander Gerst'},
            {'craft': 'ISS', 'name': 'Serena Aunon-Chancellor'}]}

Strings

  • Input with json.loads

  • Output with json.dumps

Files

  • Input with json.load

  • Output with json.dump
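A small sketch of the file-based pair, reusing the in_space string from the cell above and a hypothetical astros.json filename:

import json

data = json.loads(in_space)          # parse the string from the earlier cell

with open('astros.json', 'w') as f:  # hypothetical output file
    json.dump(data, f, indent=2)     # Python -> JSON file

with open('astros.json') as f:
    print(json.load(f)['number'])    # JSON file -> Python; prints 6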

Add yourself to Space!

Text data is much more robust than binary data

  • Self documenting

  • Much much easier to debug

  • Easier to version

  • Cost doesn't matter for small data

  • Most data is small data

What Doesn't JSON have?

  • Comments

  • Dates

Use TOML or YAML instead

TOML vs. YAML is a good argument for minimal data formats

  • The YAML spec is 86 pages

  • The TOML spec is comparable to JSON in size

  • Loading YAML is a security risk by default (see the sketch after this list)

  • Lots of variation between parsers, and lots of incomplete implementations
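The security point refers to the common PyYAML behavior where the default full loader can be coaxed into constructing arbitrary Python objects. A hedged sketch, assuming the PyYAML package (import yaml) is installed:

import yaml

doc = """
name: Jared      # YAML allows comments
date: 2018-10-01
"""

# safe_load only builds plain data (dicts, lists, strings, numbers, dates, ...),
# whereas the full loader can be abused to construct arbitrary objects.
print(yaml.safe_load(doc))  # e.g. {'name': 'Jared', 'date': datetime.date(2018, 10, 1)}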

Things to watch out for:

Writing down something in JSON that can't be represented in the language.

l = json.loads('{"number": 1.6000000000000000000001}')
print(l)
{'number': 1.6}

Things to watch out for:

Encoding/Decoding can get expensive
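A rough way to see the cost on your own machine (timings will vary; the list below is just a stand-in for a larger dataset):

import json
import timeit

# A moderately large, throwaway structure
data = [{"craft": "ISS", "name": "astronaut %d" % i} for i in range(10000)]
text = json.dumps(data)

encode = timeit.timeit(lambda: json.dumps(data), number=10)
decode = timeit.timeit(lambda: json.loads(text), number=10)
print('encode seconds:', encode)
print('decode seconds:', decode)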

Takeaways!

  • Look up file handling options with help, dir, and ?

  • Avoid dangling files and other resources by using context managers (with statement)

  • Use Python types and collections to unambiguously represent your data

  • Look up David Beazley's talk "Builtin Superheroes" for more information on collections

  • To make your software more robust, use text data

  • Start with JSON, evolve as needed

License joke -- Douglas Crockford

"The Software shall be used for Good, not Evil."

"I give permission for IBM, its customers, partners, and minions, to use JSLint for evil."