I inquired about O’Reilly Atlas (using the “Invite Me” button on their main site; they emailed back and set up a conference call) because it looks like it could be useful for so many things I work with regularly in digital scholarship, digital humanities, and other single-source needs for web and print projects, like annotated editions. O’Reilly Atlas is in private beta; it is “A powerful authoring platform built on Git,” per the website https://atlas.oreilly.com/. The slides/flier on O’Reilly Atlas (http://oreillyconnect.com/view/mail?iID=QRLH2FECJ4AUUCK623LY) make it look like a great option for digital scholarship projects, including annotated editions; book-form or printed-catalog forms of online exhibits; teaching guidebooks, which often need to be both web-born and printable; conference proceedings; academic books (like those published by university presses); and electronic theses and dissertations, which are often written with MS Word templates and very specific formatting requirements (great for a professional, polished final product, but painful to create).
Right now, the tools I recommend for all of these vary based on the people and the project needs, which isn’t ideal for me or for building a cultural awareness of, and critical capacity for, single-sourcing print/web/mobile/many-device publications. I’m hoping that O’Reilly Atlas could help as one solution, or one tool in a garage of many, for these needs. Plus, O’Reilly has a good reputation and track record for doing good, smart things that are affordable and usable by academic communities, so it may be a fit for others even if it isn’t for the projects and work that I do.
In the conference call with O’Reilly Atlas, I asked if it was okay to blog about the call, and they said it was. I wanted to include the specifics of what the call covered and why I had it, because I’m trying to remember to share more about my processes and activities when I’m gathering information on software/tools and evaluating them. I often don’t share what software I’m testing, or what the testing process looks like, simply because I’m focused on spending my time on the testing itself. And because a lot of software simply isn’t where it needs to be (for usability, total cost of ownership, functional and non-functional requirements, and all of the other considerations that come into play when evaluating software for fitness and appropriateness), testing and evaluation take up a lot of critical time. I’m always happy when other folks share reports on their software/tool tests, so I’ve been trying to do this more often, and to cover more of the full process when I do. Evaluating software and sharing the evaluative results and reasoning is a very useful contribution that digital humanists and others make to the larger academic and other communities, so I’m trying to remember to better support this as well.