Julianne Bell is an intern using Omeka in her current project, which is building a rich database of pre-modern execution ballads.
Julianne is a second year PhD student in Cultural Materials Conservation at the Grimwade Centre at the University of Melbourne. Her thesis is on the management of the deterioration of plastics in museums from an Arts and Science perspective. She is one of 11 interns at the Digital Studio. She is working on the project ‘Execution Ballads in Pre-Modern Europe’, supervised by Dr Una McIlvenna, Hansen Lecturer in History at the University of Melbourne. Dr McIlvenna has collected an impressive range of original data on ballads in early modern and nineteenth-century Europe in a variety of formats.

Julianne Bell
‘The project was my first experience with Omeka’, said Julianne. ‘I was a little daunted at first to learn a brand new program, but I found that Omeka has such a user-friendly interface… After the training session that Tyne led, I was able to start developing the database almost straight away.’

Julianne is keen to use these skills in the project: ‘It relates directly to what I’m doing for the internship project… The project includes textual data, images and sounds, as well as items such as people, locations and events, and it’s all really easily collated and managed together.’

Julianne also sees a number of benefits to using Omeka: ‘While you develop the database, you’re also developing the public-facing output at the same time. That seems like getting two jobs done at once.’

What’s in the box… categorising and digitising objects for the @digitalstudioUM @omeka training @tynedaile @ResPlat pic.twitter.com/zvFY5ANte4
— Kim Doyle (@kim_doyle1)
May 28, 2018

Dr Stephen Giugni OAM
Associate Director, Research Platform Services.
Making sense of data, the data deluge, big data, sharing data, publishing data, data, data, data…
Research is increasingly a data-driven activity, but data is only the starting point of our research journey. Data is valuable in its own right, and that value increases as we bring understanding to it: interpreting it and combining it with other sources of information to unlock the mysteries of our universe, our environment and our social interactions, and to help us make better decisions.
This issue of our newsletter explores the ways the University of Melbourne research community is using our data management tools and platforms to bring life and meaning to data. Something as simple as grouping or graphing data can provide enormous insight into aspects of human endeavour. For example, bringing collections together to create an exhibition can focus attention on an area of research, or provide a platform for exploring new possibilities. Join us as we explore the world of data.
Dr Stephen Giugni OAM
Associate Director, Research Platform Services.
w: http://research.unimelb.edu.au/infrastructure/research-platform-services

Professor Rachel Fensham
Assistant Dean, Digital Studio
The Digital Studio is delighted to be a part of the Research Platform Services community. The Studio is a collaborative space in the Faculty of Arts supporting researchers working in the digital humanities, arts and social sciences. We are building a community of digital humanists here at the University through a diverse range of programs and activities, including an internship scheme tackling research data challenges for more than 10 projects. These projects range from organising the textual data of execution ballads in Omeka to data scraping and network analysis of Twitter content. We are also collaborating with industry partners such as Regional Arts Victoria, Creative Victoria, Lucy Guerin Inc, Digital Heritage Australia, and ACMI-X.
The Studio itself is a technically resourced workspace in Arts West which hosts seminars, visiting fellows, training and laboratory sessions, and project exhibitions. In semester two, we are excited to be hosting a Visiting Fellow from Leeds University, and will be working with Research Platforms and other partners, SCIP and Researcher@Library, to host data curation training and workshops. We will also be presenting a Digital Heritage seminar series that explores cutting-edge practices and issues in the galleries, libraries and museums sector.
Keep an eye on our events page or sign up to our email newsletter to find out what’s happening in the Studio. Or, if you’re a researcher working on a digital HASS project, feel free to get in touch by email at digital-studio@unimelb.edu.au.
Professor Rachel Fensham
Assistant Dean, Digital Studio
by Emilie Walsh
The best part of working for Research Platform Services as a CAD and 3D printing ResCom* is that I get to meet researchers working with 3D across all disciplines. Reagan has been helping with my training sessions for a few months now.
Awesome first #fusion360 training yesterday with @JongEric & @reaganks! So good to see interest in #CAD for a variety of #researchers! @ResPlat @unimelb pic.twitter.com/AvGo3O0Hfu
— Emilie Walsh (@emilouwalsh)
1 May 2018
If you are interested in learning 3D modelling yourself, come to my next training:
https://fusion-360-july-2018.eventbrite.com.au
Reagan graduated from his Master of Engineering last year and is now working on an exciting project using his skills in 3D modelling and 3D printing!

Reagan Kurniadwiputra Susanto
Tell us about your background and experience at the University of Melbourne?
I did a Bachelor of Science majoring in Bioengineering Systems and a Master of Engineering majoring in Biomedical Engineering. During my undergraduate studies, I was quite active in the Indonesian Student Association, both at the University of Melbourne and in the state of Victoria. During my master’s degree, however, I focused more on my academic skills and tried to do something more relevant to my field. I joined the Melbourne University Racing team as a Junior Engineer on the Low Voltage team and also participated in the Student Ambassador Leadership Program from the School of Engineering.
Tell us about your interest in 3D printing?
It all started with a subject called Biomaterials, where I had to design a spinal implant for people with a specific lower back problem. I had to 3D print the implant and vertebrae at 1:1 scale to visualise the results. This caught my interest in 3D printing, since I could hold something I had designed myself, and it was very quick and cheap. Straight after that subject concluded, I bought myself a 3D printer to kick off my journey of learning and playing around with a printer.
I have found that 3D printing is very well supported by the online community. I have taught myself some very useful skills through online resources such as:
- Make Anything - a creative works channel; he posts a lot of fun and functional things, very very inspiring
- Makers Muse - 3D printing reviews, tutorials, etc.; he’s Australian and quite popular in the 3D printing community
- RCLifeOn - a mix of 3D printing, remote control, drones, etc., but his 3D printing works are very creative and functional
I am quite interested in utilising 3D printing for rapid prototyping. For example, creating a box for electronics, a rig to simulate breathing, a bracket to join mechanical structures, and a custom-made holder for a very specific purpose.
Reagan’s breathing box! A GIF of the first prototype.
3D printing is definitely not the best tool, but knowing how to utilise the technology in combination with other techniques will definitely create something unique and interesting.

Reagan’s second prototype of the breathing machine.
What skills did you learn during your Master’s?
Basically general engineering skills like programming and project management, as well as more specialised skills like electronics, signal processing, 3D modelling/printing, and medical device commercialisation. Throughout the degree, I also learnt organisational and leadership skills through student clubs.

Reagan helping out during one of my Fusion 360 training sessions.
So you graduated! Tell us about your current project?
I am working on a MedTech startup developing a respiratory rate monitoring system for hospital patients. We are currently at a very early stage, still developing the product and the commercialisation plan. We hope to get the device to market and help clinicians save their valuable time. The device could also potentially reduce hospital costs related to adverse events.
How can the skills learnt in research be applied in industry?
The technical skills are definitely useful during device development. They help you build a proof of concept and significantly reduce development costs. The most important skill, though, is the problem-solving ability that any researcher gains from their training; at the very least, it helps me prioritise and make important decisions!
If, like Reagan, you are interested in 3D modelling and 3D printing, come to one of my workshops and learn CAD with researchers from all disciplines. Check our training calendar here:
https://www.eventbrite.com.au/o/research-platforms-services-10600096884
*ResCom: Research Community Coordinator
Neil Killeen (Research Platform Services), Donna Hensler and Nicolette Freeman (VCA Film and Television)
The University of Melbourne’s VCA Film and Television is Australia’s longest continuing film school. Its high-value (historical, cultural, research and teaching), 50-year-old Student Film Archive was originally stored only on transient and largely inaccessible formats such as celluloid film and magnetic media.
The VCA took initial steps many years ago to digitise and preserve its collection; however, it wasn’t until 2015, with the approach of the school’s 50th birthday, that the Film School was able to start digitising 50 significant film titles to celebrate and promote its unique historical moving image collection. That same year, the archive was recognised as a Cultural Collection by the University. This was achieved on the basis of a Significance Assessment, which made the case for the archive’s value and its research, teaching, learning and engagement potential. A digitisation investment grant from Film Victoria then enabled a cultural partnership with ACMI, which took on the digitisation of the celluloid films in the collection.
Subsequently, in 2017 and 2018, facilitated by a grant from the Australian National Data Service (see http://www.ands.org.au), the VCA, Research Platform Services (ResPlat) and commercial partner Arcitecta (supplier of the Mediaflux data operating system - see http://www.arcitecta.com) transformed the digitised films into a highly curated, metadata-rich film archive, accessible via a specialised and re-usable Audio-Visual Archive Portal operating in ResPlat’s Mediaflux data management platform. This work is an essential part of the ongoing process to ensure maximum University, national and international use of, and leverage from, this important collection. Most recently, this visual showcase has received a further grant from the University’s Student Services Amenities Fund, which will further enhance the interface and functionality.

The picture shows a box of USB drives containing digitised films being uploaded, and the screen shows a few of the films via the current AV Archive interface. Films are stored in three formats: the master (JPEG2000, preservation quality), the mezzanine (ProRes 422, for editing and exhibition) and an H.264 proxy for streaming via web browsers. Interface functionality includes the ability for users to upload films, view films, upload associated artefacts such as stills, store rich descriptive metadata, create playlists and tag film segments for later discovery and viewing.
Neil Killeen, Wei Liu (Research Platform Services), and Andrew Leis (Bio 21)
The era of big data is truly upon us, and a current example at the University of Melbourne is the new Cryogenic Electron Microscope situated in the Bio21 Facility. This instrument is now fully operational and capable of producing up to about 14 TB per day; more typically, it produces a few TB per day. It is expected that in two years’ time there will be a fleet of these electron microscopes producing about 0.5 PB of data per week!
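For a rough sense of scale (our arithmetic, assuming the quoted peak rate), a single instrument producing 14 TB per day generates on the order of 100 TB per week, so it would take roughly five such microscopes running flat out to reach 0.5 PB per week.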
Major challenges for all organisations operating large data-generating instruments include where and how to store data long term, how to get data to that storage efficiently, and how researchers will access and process that data. The cost burden of storing data, typically for a minimum of five years, is very substantial, and new paradigms involving pre-processing and discarding of data will be required.
Over the last 6 months, Research Platform Services (ResPlat) has worked with the Electron Microscope Platform (led by Assoc. Professor Eric Hanssen) to handle the data from this first instrument. The system in place, which is reusable for other instrument contexts, utilises ResPlat’s primary data management platform (DMP) built with Arcitecta’s Mediaflux system. The DMP operates in the University’s main data centres and data are streamed to it in quasi real time from a workstation (which also does some pre-processing) located with the electron microscope.

Figure 1 - The upload pattern
The upload software client is generic and reusable. The data is uploaded directly into a ‘project’ accessible by (only) the research team acquiring the data. Data may be stored in the project transiently or persistently (for long-term management).
The researchers can then access their data from the DMP in a multitude of ways, as seen in Figure 2. The DMP offers a very flexible multi-protocol environment so that users can pick the method that works best for them. Whilst it is possible to create fast networks connecting instruments to the data centre, users of instruments like this may be located anywhere (e.g. at other Australian or international institutions). Therefore, a challenge for delivering data to users is that their network throughput may be quite poor.

Figure 2 - The download pattern
In Figure 2, the user represented can log in to the DMP. However, we have also developed download solutions where shareable links are despatched to researchers who don’t have accounts (perhaps for one-off use). For big data, ResPlat has developed shareable links that, rather than downloading the data directly (which is also available), download a “download manager” which itself fetches the data restartably and in parallel.
Finally, the pattern above in Fig. 2 is largely a ‘copy out’ pattern. This means the data are duplicated, which adds to the time and cost burden, especially for big data. ResPlat are exploring ideas around ‘in-situ’ access (like SMB but more scalable) so that users of, say, a High Performance Compute facility can access their data directly in the DMP and compute on it. In this way, data can be stored and managed as well as directly integrated with processing environments, so as to minimise time and costs.
In recent years, digital Humanities, Arts and Social Sciences research has enjoyed a surge in interest, critical attention, promotion and funding.

One offshoot of this growth is the Digital Studio Graduate Internship Scheme. At the end of May, the recently-selected Digital Studio interns attended a workshop on Omeka, an open-source web publishing platform for the display of cultural heritage objects.
In addition to learning the ins and outs of building a basic Omeka site, the interns generated detailed and insightful discussion around such topics as: copyright, provenance, research data management, graphic design and digital curation.
Several of the interns are employing Omeka in their current projects. Recent months have seen exciting new updates by the Omeka team. Most recently, the release of Omeka S has responded to the growing need for HASS researchers to connect across collections and repositories. Omeka S is a next-generation web publishing platform for institutions interested in connecting digital cultural heritage collections with other resources online. First released in November 2017, it has already been widely adopted and developed by the Omeka community.

.@ResPlat’s @tynedaile talking metadata with the @digitalstudioUM interns pic.twitter.com/iUV1Hkcgq7
— Kim Doyle (@kim_doyle1)
May 28, 2018

Beyond the Digital Studio Internship program, there is also lots of exciting work happening in the Omeka-space. Last month Research Platform Services ran a meetup focused on displaying 3D objects online with Omeka. In the coming weeks, there will be more opportunities to discuss the latest developments in the online cultural collections space with a meetup to explore ‘Omeka and Copyright’ as well as a workshop on Research Data Management as relevant to the online display of HASS research.
To hear more about these exciting events and opportunities or to sign up, get in contact with Dr Tyne Sumner, our Senior Research Community Coordinator at ResPlat.
Tyne runs trainings and events to support Omeka as well as a range of other initiatives designed to increase engagement in digital HASS research at the University of Melbourne and beyond.
Tyne is also involved in consulting and engagement work on the exciting new Humanities, Arts and Social Sciences Data Enhanced Virtual Laboratory (HASS DEVL). The HASS DEVL is a national collaborative project that aims to lower the barriers to entry for digital infrastructure to support HASS research, increase interoperability between existing platforms and deliver skill-building opportunities across the HASS sector.

Get in touch with Tyne if you’d like to know more about the HASS DEVL project or to be involved.
Sign up for the next Introduction to Omeka training here.
We have spoken to PhD candidate Emad Alghamdi from the School of Languages and Linguistics about his research project.

What is your PhD Research Project?
I am trying to answer a deceptively simple question: what makes a video complex or difficult for language learners? My study of that question is very challenging at many levels; for one, I am dealing with a dynamic and multifaceted type of data: videos.
What prompted you to choose this research topic?
Before starting my PhD, I worked as an English teacher for non-native learners for almost six years. As a teacher, I came to know that students like to watch videos and that they learn a lot from watching them. But whenever I looked for videos on the Internet, it was always a challenge to find ones that were not too difficult for my students. The process was tedious and time-consuming, and I always wished there was an automated tool that could help me find the right videos with less effort. So I decided to take up the challenge and build one, for myself and for all language teachers and practitioners.
What are some of the challenges you have faced or overcome in your research project to date?
With the aim of developing a prediction model of video complexity, I searched for an approach that could help me make sense of the data (videos), and I found machine learning to be the most appropriate approach for the task. But machine learning is an emerging and active field, and it is very challenging to keep up with the latest approaches and techniques.
Another challenge I faced is that I could not find a video dataset that I could play (experiment) with. So I built a video dataset myself and thought I had overcome my biggest challenge. Not long after I started analysing my data, I knew I had a very challenging problem on my hands. Hopefully, I’ll get through the analysis phase soon.
What digital tools do you use to help analyse your research data?
To remind you, I am analysing videos (a lot of them), which are generally made up of three components: language, picture and sound. I use different tools for each component. At the moment, I am focusing on analysing the language component using advanced NLP tools such as TAACO, TAALES and Coh-Metrix.
I am also using many great Python libraries for data pre-processing, presentation, and visualisation such as NumPy, Pandas, Matplotlib, and Seaborn. For building ML models, I have been exploring Scikit-learn and TensorFlow.
Have you attended any workshops at the university to learn how to use the digital tools you need?
I am SO fortunate to be a resident at the Digital Studio, where all these fascinating workshops and seminars are happening. I have learnt a lot from attending those workshops and others organised by Research Platform Services. I recommend that every student take advantage of such wonderful workshops. You never know what doors they may open for you.
Sweave and knitr are engines for generating dynamic reports with R that are elegant, flexible, and fast. Sweave is a package in R that enables integration of R code into LaTeX or LyX documents. Developed by Yihui Xie, knitr combines features from Sweave with other add-on packages to enable integration of R code into not only LaTeX and LyX, but also Markdown, HTML, bookdown and other document types.
Access the documentation on knitr at yihui.name/knitr.
On the 30th of May, 2018 we held a meetup to introduce the R community to knitr and Sweave, and showcase some of their applications. During the event, R-markdown, HTML, LaTeX and bookdown examples were presented. Keep reading to hear about each of the presentations given at the event!
Ready to make elegant and replicable documents with all you R-data-analysis in it? Come to our R-meetup to find out how! @ResPlat @MeirianLT https://t.co/2UsKZFSDw7
— Pablo Franco (@jpablofranco)
May 17, 2018
Meirian began the event by introducing Sweave and knitr, and demonstrating how easy it is to create an R-markdown document in RStudio.
R-markdown enables you to create documents that contain all your code and results, making your data analysis entirely reproducible. However, R-markdown can do much more than that! It allows you to create a script containing your whole data analysis pipeline, which you can play with as you try different statistical or visualisation methods. The great advantage, as opposed to standard R scripts, is that you can add text amongst your code to create amazing documents that you can personalise with just a few clicks! – Pablo Franco @jpablofranco
There are a few useful cheat sheets available online. I recommend this one, because it outlines the steps to produce your own R-markdown document.
After viewing a demonstration, participants were then encouraged to create their own R-markdown document, and challenged to reproduce the following example:
The *mean* car speed was `r mean(cars$speed)`mph.
The mean car speed was 15.4mph.
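For readers who missed the session, here is a minimal, hypothetical R-markdown source file in the same spirit (the file name, title and chunk label are our own invention): the YAML header sets the output format, inline code like `r mean(cars$speed)` is evaluated in place, and fenced chunks hold longer pieces of R code that knitr runs when the document is rendered.

````markdown
---
title: "Car speed summary"
output: html_document
---

The *mean* car speed was `r mean(cars$speed)` mph.

```{r speed-plot}
# knitr runs this chunk and embeds the resulting plot in the output
plot(cars$speed, cars$dist,
     xlab = "Speed (mph)", ylab = "Stopping distance (ft)")
```
````

Knitting the file in RStudio (the Knit button, or `rmarkdown::render("report.Rmd")` at the console) produces an HTML page containing the text, the computed mean and the plot.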
We’re having a wondeRful time learning #knitR with @jpablofranco at @ResPlat pic.twitter.com/SbksF32iQi
— Meirian (@MeirianLT)
May 30, 2018
Next, Pablo gave an example of how to create HTML files which contain R code. To demonstrate, he shared an example he had prepared.
Participants then had hands-on experience modifying the document to generate three types of reports:
- Show everything in the script: a document that shows all your code and output. This is excellent for reproducibility, and great for sharing with collaborators.
- Show only the results: a document that shows only the output from your analysis. Also great for sharing with collaborators.
- Show only the relevant parts: choose which pieces of code and which outputs to show. Not everyone is interested in the complete process; this functionality allows you to personalise exactly what you want to share.
At the end of the exercise, we were able to create amazing HTML files that included interactive plots (using the plotly package) and even good-looking regression tables (using the stargazer and pander packages). – Pablo Franco @jpablofranco
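These three report styles correspond to knitr’s chunk options. As a rough sketch (illustrative values, not the exact chunks used on the day), changing a single option in a chunk header controls what appears in the rendered report:

```r
# Per-chunk options (set in the chunk header) control what the report shows:
#   ```{r, echo=TRUE}                    show the code and its output ("show everything")
#   ```{r, echo=FALSE}                   hide the code, keep the output ("show only results")
#   ```{r, echo=FALSE, results='hide'}   run the code but show neither code nor text output
#   ```{r, include=FALSE}                run silently; handy for setup chunks
# A document-wide default can be set once in a setup chunk:
knitr::opts_chunk$set(echo = FALSE)
```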
LaTeX is a document preparation system for high-quality, beautiful typesetting. Tim showed us how we can also create beautiful LaTeX documents which contain our R code.
One of the best parts of having a science degree is the way it empowers us in our mundane lives outside of academia. When a housemate needed clarification about how much they should pay each fortnight in rent and bills, I helped them out with a beautiful and reproducible expenditure summary using Sweave, version controlled with Git. Sweave smushes together R and LaTeX in the spirit of “literate programming” as espoused by Donald Knuth. – Timothy Rice @resnomicon
Tim’s budget is now hosted at notabug.org/cryptarch/budget.git
If you would like to learn more about typesetting your R code in LaTeX, I recommend the tutorial on ShareLaTeX.
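As a rough illustration of the approach Tim described (a hypothetical sketch, not his actual budget file), a Sweave document is an ordinary LaTeX file, usually with a .Rnw extension, in which R code sits inside noweb-style `<<>>=` chunks:

```latex
\documentclass{article}
\begin{document}

\section*{Fortnightly budget}

% Sweave/knitr evaluates this chunk and inserts its output into the PDF;
% the figures below are invented purely for illustration.
<<budget, echo=FALSE>>=
rent  <- 800
bills <- 120
cat(sprintf("Each housemate owes $%.2f this fortnight.", (rent + bills) / 2))
@

\end{document}
```

Running `knitr::knit2pdf("budget.Rnw")` (or `R CMD Sweave` followed by pdflatex) executes the chunk and typesets a PDF with the computed amount in place.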
@resnomicon from @ResPlat showed us how he saved some $$$ by keeping a budget in #Sweave and combining the powers of #rstats and LaTeX! ✌#candid pic.twitter.com/zlqm5OLvte
— Meirian (@MeirianLT)
May 30, 2018
Finally, we saw another impressive application of R documentation from David, who had created a bookdown project, which was itself an R tutorial!
Several R packages have been developed to take advantage of R’s RMarkdown functionality, and one of those is bookdown. bookdown lets you compile a series of RMarkdown files (with a couple of bookdown specific ones for formatting) into either a traditional book as a pdf or, as in my case, a website. Using materials I had developed for a beginner R workshop as an example, my presentation showed the general structure of a bookdown project, how to compile it into a book/website, and how to make use of GitHub’s gitbook functionality to host my course material online as a website free of charge. bookdown serves as an accessible middle ground between basic RMarkdown documents and full blown LaTeX documents, and with the ability to embed R code it becomes a viable option for writing up your thesis! – David Wilkinson
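For orientation, a bare-bones bookdown project (a hypothetical layout, not David’s actual repository) is little more than a folder of RMarkdown files plus two small configuration files, compiled with a single call:

```r
# Typical minimal layout of a bookdown project (illustrative):
#   index.Rmd      first page; its YAML header includes "site: bookdown::bookdown_site"
#   01-intro.Rmd   one .Rmd file per chapter, merged in file-name order
#   02-basics.Rmd
#   _bookdown.yml  project options (output filename, chapter ordering, ...)
#   _output.yml    output formats, e.g. bookdown::gitbook and bookdown::pdf_book

# Build the whole book as a gitbook-style website (use bookdown::pdf_book for a PDF):
bookdown::render_book("index.Rmd", output_format = "bookdown::gitbook")
```

The generated _book/ folder of static HTML can then be hosted for free, for example via GitHub Pages, which is how course material like David’s can be served as a website at no cost.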
You can check out the result of David’s hard work online here. The corresponding bookdown project can also be viewed on GitHub.
This blog post was written by Meirian Lovelace-Tozer, a Research Community Coordinator and LaTeX trainer at Research Platform Services @ResPlat.

David also impressed us all with his #rstats tutorial, which he developed with the #bookdown package! Thanks for helping attendees with their first #knitr experience @ResPlat pic.twitter.com/vHcWTV3lyL
— Meirian (@MeirianLT)
May 30, 2018
We’ve started a new blog series, ResChat @ ResPlat, chatting to members of our community about their research and the tools they use to work smarter, not harder.
Our first interviewee is Wendy, who is in the first year of her PhD here at the University of Melbourne.

What are you researching for your PhD?
I am looking at climate change and how it is affecting the phenology of grape vines & the composition of grapes.
This research will hopefully help with short-term planning for yearly vineyard operations, and long-term planning for where vines might be planted and which varieties to plant, given the changes in weather and climate we are experiencing and will continue to experience.
How did you choose this research topic?
Formerly I worked in the wine production industry in Victoria. Being part of that industry, we saw a lot of changes in phenology timing, such as earlier flowering and harvest, and I was interested in exploring that further.
What tools do you use in your research?
I’ve been using MATLAB. First, I learnt the basics, especially formatting and structuring my data. Now, I’m slowly learning to use the modelling components, such as graphs, to examine the data and try to find trends. Later, I’m hoping to expand my skill set to include machine learning to explore the data further.
Why did you choose MATLAB?
For me it started with the Research Bazaar. My supervisor highly recommended it, so I went. It was a massive eye-opener as to all the various techniques for handling data that are available, as well as finding out which ones would work for me. I met Doruk, who gave me an overview of MATLAB; I attended his session and then came to the training sessions.
I was looking for tools that had intuitive ways to crunch the data & do it more efficiently. I don’t want to spend all my time on unnecessarily laborious & repetitive tasks; instead I want to focus on the analysis and thinking around the data.
Similarly, in my research there is so much data to work through! For example, a vineyard manager observes and records when vines go through the stages of phenology, and the vineyard weather station records the weather. These recordings are taken every 15 minutes, every day of the year. I’ve got 18 years of data - can you imagine how much data that is! I then want to relate the weather data to the observed phenology stages.
Yes, I could use a spreadsheet, but MATLAB is just so much more efficient & effective. I can work with and across several large datasets, perform the calculations and create the graphs I need - for example, matching phenology dates & weather data - and eventually build models to predict future phenology changes. It has reduced my workload by a factor of six at least.
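For a rough sense of scale (our back-of-envelope arithmetic, not Wendy’s figures): a station logging every 15 minutes produces 96 readings a day, so roughly 35,000 a year and over 630,000 across 18 years, for each variable the station records.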
Would you recommend our services?
At first I was nearly in tears; now I’m excited! I can’t imagine how I would’ve managed without ResPlat and would certainly recommend it!
You do feel so overwhelmed at first, but Research Community Coordinator Doruk was so great at teaching & very patient, taking it step by step; it made everything easy.
I’ve learnt so much from the training, and it’s great to have meetups where you can get help with your own problems. The community atmosphere is also fantastic, giving encouragement and letting you meet other people who can workshop your problems with you.
Thank you ResPlat!
*****
Want to share your research story? Ping us on Twitter @ResPlat. We would love to hear from you!
Want to sign up for training & events? See our latest calendar here: resplat.eventbrite.com.au