gitignore
Approaches
In terms of .gitignore files there are at least two schools of thought: one in which the .gitignore files within a project should list any file that should not be included in source control, and another which focuses on listing only those ignored files that are generated by the project itself. The implicit difference is that the former accounts for files produced by whatever tools people happen to be using, while the latter holds that the output of such tools should be ignored through configuration of their execution environment rather than the project.
The former is easier, as it provides a single mechanism for recording a file to be ignored. I nevertheless gravitate towards the latter, since the .gitignore files themselves can then become a useful source of information about the project rather than being littered with information about whatever tools may have been used at some point. I therefore want to make sure that my global settings are configured to ignore the appropriate files.
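As a sketch of what that global configuration might look like (the specific patterns and the ~/.config/git/ignore path are my own illustrative choices, though that path happens to be git's default user-level ignore location), git's core.excludesFile setting points at a user-level ignore file:

```shell
# Sketch: a user-level ignore file for tool/OS output. The patterns
# below are illustrative examples, not a definitive set.
mkdir -p ~/.config/git
cat > ~/.config/git/ignore <<'EOF'
# Editor artifacts
*~
.*.swp
# macOS
.DS_Store
EOF

# Point git at the file explicitly; guarded in case git is absent.
command -v git >/dev/null &&
  git config --global core.excludesFile ~/.config/git/ignore
```

With this in place, per-project .gitignore files are free to describe only what the project itself generates.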
As I think about trying to configure git, I realize I first need to circle back to the question of how I'll configure bash. I'll borrow an approach I've used in the past, similar to how many Linux systems work (at least Gentoo), which makes use of a directory of files that will be loaded (named bash.d by my convention). This promises simplicity over updating a single shell profile file, in that each desired piece of functionality can live in a separate file, and therefore each file can simply be written rather than merged into a single shared file.
This pattern is not natively supported by bash, but a small function can fill that gap. I've gone down the path of building out logic and patterns to support management of the bash profile before, but ultimately it doesn't seem worth the effort.
source_dir
Pass a directory; all .sh and .bash files within that directory will be sourced (only the immediate specified directory, not subdirectories).
The parameter is quoted in case it contains a space, but the rest of the string is left unquoted to allow for glob expansion. The nullglob option is enabled so that a lack of matching files does not produce an error.
source_dir() {
  # nullglob makes a pattern with no matches expand to nothing,
  # rather than being passed through literally (which would make
  # the subsequent source fail).
  shopt -s nullglob
  for f in "${1}"/*.sh "${1}"/*.bash; do
    . "${f}"
  done
}
source_dir
The function itself needs to be loaded and can then be used to load the other files in the directory. With a naive setup this leads to the function being loaded twice, but there's no evident pragmatic reason to care that that happens, so it will just be ignored. The line below will therefore be added to ~/.bashrc manually, as it does not seem worth automating.
. ~/.bash.d/source_dir.sh && source_dir ~/.bash.d
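As a quick sanity check (the file names and variables here are invented for illustration, and the function is repeated so the snippet is self-contained), sourcing a throwaway directory should pick up both extensions, and an empty directory should be a silent no-op:

```shell
source_dir() {
  shopt -s nullglob
  for f in "${1}"/*.sh "${1}"/*.bash; do
    . "${f}"
  done
}

# Populate a throwaway directory with one file of each extension.
dir=$(mktemp -d)
echo 'greeting="hello"' > "${dir}/a.sh"
echo 'farewell="bye"'   > "${dir}/b.bash"

source_dir "${dir}"
echo "${greeting} ${farewell}"   # → hello bye

# An empty directory produces no error, thanks to nullglob.
source_dir "$(mktemp -d)"
```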
exec-path-from-shell
Having recently re-adopted Emacs, I'll be pulling in a flavor of
exec-path-from-shell
(1) so that everything works on Macs. The
library is nice and simple; likely my biggest concern is that it's
largely a combination of fairly general behavior, all of which is defined
locally by the library (a pattern typical of Emacs code and other
ecosystems like C). A couple of years ago I refactored the code to have
a clearer split between general and specialized functionality within
elisp, but given that I didn't go so far as to apply the general
functionality elsewhere, I never got far enough to establish any
benefit.
This time I'm viewing the functionality from the perspective of Data Precedence: the core logic can be reduced to evaluating expressions in a particular environment (a login shell) and parsing the results back for use by Emacs. Rather than expecting elisp to handle both sides of that, it seems better to split things out such that an expression language and exchange format can be defined to divide the behavior; the overall interaction then becomes a more typical IPC (and therefore the core aspect of this feature is making sure to use a login shell).
The exchange format is the easier question: I'll start with JSON as it is widely supported (including within Emacs). This should become negotiable in the long term, which will be worked on in subsequent increments. The expression language warranted a bit more thought. What I've thought in the past, and am landing on now, is that I want something like YAML (or JSON) with the ability to embed expressions. A clear language for JSON with embedded expressions (though it took me a bit to be reminded of that fact) is JavaScript. I'll likely adopt a constrained subset of the language, and will probably start with trying out the Elk interpreter (one of my biggest hesitations about JS is the size of the major engines). While some other languages appeal more to me personally, I typically enjoy the JS language (though I find certain aspects of the surrounding culture a bit irksome).
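As a minimal sketch of the shell half of that IPC (the helper name and the set of variables are my own invention, and escaping here covers only backslashes and double quotes, which suffices for typical PATH-like values), a login shell could print the requested environment as a JSON object for the editor to parse:

```shell
# Hypothetical helper: print the named environment variables as a
# single JSON object on stdout. An editor would invoke this via
# something like: "$SHELL" -l -c '...; emit_env_json PATH MANPATH'
emit_env_json() {
  local first=1 name value
  printf '{'
  for name in "$@"; do
    value=${!name}           # indirect expansion: value of $name
    value=${value//\\/\\\\}  # escape backslashes
    value=${value//\"/\\\"}  # escape double quotes
    [ "${first}" -eq 1 ] || printf ','
    first=0
    printf '"%s":"%s"' "${name}" "${value}"
  done
  printf '}\n'
}

# Here the current shell stands in for the login shell.
emit_env_json PATH HOME
```

On the Emacs side the corresponding output could be handed straight to its built-in JSON parser, which is what makes JSON an easy first choice for the format.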