We’re hiring consultancy roles at Research Platform Services!
These roles help lead the team in key new service initiatives that will improve the capability of graduate researchers at the University of Melbourne to conduct world-class research with cutting-edge digital research skills.
Click here to view the duty statement, which includes further information and how to apply.
Applications close: 8th July 2019 at 5pm.

What is your PhD researching?
I am researching how design elements of the pokies may contribute to harmful gambling behaviour. Broadly, I also have an interest in how cognitive and behavioural sciences can inform public policy to improve quality of life or reduce harm.
For the most part my PhD focusses on a single aspect of pokie machine design that researchers have called ‘losses disguised as wins’. These events occur when a machine returns a small payout that is less than the original bet. Financially speaking, these events are actually a loss, but the machines celebrate them just like wins (i.e., with sounds and lights). So these events might make gambling feel more rewarding than it actually is.
So we’re looking at that in a few different ways. Firstly, we are running an EEG study, where we record a signal that can be used to index the brain’s response to positive and negative outcomes. We’re analysing whether players process these losses more like wins than like losses. We’re also doing an eye-tracking study, where we teach some players to spot whether an outcome is a genuine win or a loss disguised as a win. We can then measure their eye movements to check whether they engage with the visual feedback on the machine, and whether they are doing the calculation necessary to tell the difference. When we make small mental calculations the pupil dilates, so we can also use pupil size to check that a calculation is taking place.
The last component of my PhD is a social attitudes survey. We want to know what happens when we tell the general public about these particular pokies features: whether that changes their disposition towards harm-minimisation policy in the sector, and whether it increases or decreases stigma towards people who engage in pokies gambling.
Can you tell me more about the tools you are using in your research?
I have been putting my survey together using Qualtrics, which is an easy web-based survey design platform. I plan to do all my analyses using R and RStudio, specifically the tidyverse packages, which I’ve been learning with ResPlat. After getting involved with ResPlat I’m pretty comfortable cleaning data and using the tidyverse in R to set everything up nicely, but I haven’t done any of my analyses yet because I’m still collecting data!
For the EEG data, there are open-source MATLAB toolboxes called EEGLAB and ERPLAB which are used to analyse EEG data. I’ll also be using Psychtoolbox, another toolbox written for MATLAB, to present stimuli to participants, so I’m rapidly trying to learn MATLAB this year!

Would you recommend our services to other Graduate Researchers?
Absolutely, I think that the ResPlat service is great. There are two things that you guys do really well. The first: often the biggest challenge when you want to learn a new tool is that you don’t know where to start. Your introductory courses are a great response to that, as they get you up to speed with the basic operations and fundamentals of the tool for research. If you were to try to teach yourself alone on the internet, although resources are getting better, you wouldn’t know what to search for or what’s important to learn first. The courses put the training wheels on and let you gain all the basic knowledge you need to start answering your own questions.
Another thing you guys do really well is build a community of researchers. I like learning socially, and I find that a good test of your knowledge is to try to explain it to someone else and help them troubleshoot. Almost immediately after finishing the Introduction to R workshop, I started coming along and helping out. Through this you become connected with other researchers who are learning, so you can problem-solve together, which helps solidify a lot of the knowledge and builds the research community too.
Dan Myles is a second-year PhD candidate at Monash University, with a supervisor from the Melbourne School of Psychological Sciences at the University of Melbourne. He also works as a Research Assistant at the Decision Science Hub at the University of Melbourne.
You can sign up for free digital research skill training here.
Check out our training catalogue here.
The Melbourne TrACEES Platform (Trace Analysis for Chemical, Earth and Environmental Sciences) provides high-quality surface, chemical, trace-element and speciation analysis, along with associated structural study services. The Platform consolidates substantial instrumentation and multidisciplinary expertise, supporting researchers in the chemical, materials, environmental, agricultural and life sciences.

Research students who access the TrACEES platform and require advanced computing skills
will benefit from the student-led training programs offered by Research Platform Services.
For example, these programs cover data management training, such as systematic naming, data repositories and organisation.
The TrACEES Platform is also keen to connect its analytical activities with our community of many talented research students and entrepreneurs. Through ResPlat, researchers who have strong computing skills can connect with those in the physical sciences and work on great ideas, or vice versa. There is huge potential to develop modern and innovative chemical detection devices that require multidisciplinary expertise spanning electronics, software and chemistry. The development of open-source management systems that help laboratories organise their assets and inventories is another good example.
More about the TrACEES Platform
The Platform facilitates coordinated access to three nodes:
Capability and Techniques
TrACEES offers the following services:
For more information, please visit our website: https://chemicalanalysis.unimelb.edu.au/ General enquiry: tracees-enquiries@unimelb.edu.au
Contact: Dr Alex Duan (Platform Manager)
Helping researchers and building communities.

Last year, Research Platform Services helped over 1200 researchers from across the University develop a new skill, improve their understanding of a digital tool, build a relationship with a fellow researcher tackling a similar problem, and feel like they were part of a community.
We provide a range of services to support researchers such as Cloud, High Performance Computing and Data Storage – but helping you develop a new research skill, enabling you to build upon and improve a digital skill set that will unlock knowledge and a deeper understanding of data – now that is truly amazing.
Perhaps the greatest thing about our training program is that it is largely delivered by researchers, for researchers – fellow travellers on the journey who truly understand exactly what pain you are feeling. They may have been down the same road not so long before you…
It’s a great model of delivery – and a tribute to Research Community Manager David Flanders and the team that he has built, who are passionate about training, building communities and creating pathways for success.
Our training program has many faces, whether it be ResBaz, a Soirée or an R training session – they are all designed to support you on your amazing journey of discovery. Just let us know how we can support you better.
We share with you the highlights of the evening and compile for you a list of all the digital tools that were suggested to help you on your quest to open science.
Our Soirée is underway starting with a panel on Open Science! We’ve still got games, community awards and much more. Come by to colab - ERC level 3! #resplat #unimelb pic.twitter.com/MoTBCvaYi4
— Research Platform Services (@ResPlat)April 12, 2019
The panelists (@smwindecker, @lingtax (from @MelbOpenRes) and @GeoGarber) delivered an insightful discussion on how researchers experience open science and how digital tools can contribute to open research.

@jpablofranco opened the panel discussion by citing a Wikipedia definition of open science: “Open science is transparent and accessible knowledge that is shared and developed through collaborative networks”.
With that, the panel moved swiftly into the future of research and the exciting expectation that open science practices would one day be the standard in research.
@ResPlat panelists and guests from @MelbOpenRes. pic.twitter.com/mQjGAeJUhs
— Dr Tyne Daile Sumner (@tynedaile)April 12, 2019
The expectation of change came from different arguments:
Dr Mathew Ling: “We do all this nice work from tax payers and then those same tax payers (as patients) can’t see the results of the research!”
Saras Windecker: "You can save time by using open tools because it is easier to revise your research analysis pipelines"
Panelists pointed out the existence of idiosyncrasies in each discipline and how different challenges would be faced by each field.
Jon Garber: “the problem is that my data isn’t my own (it’s owned by several other stakeholders), so being “open” is not as easy as it sounds"
However, change is happening, and with the help of graduate researchers, change will happen!

Thanks to all the panelists and participants who made this evening a great success! And special thanks to @MelbOpenRes for their collaboration on this event.
Thanks for coming along! A robust panel it was. https://t.co/8He5fSV5Kz
— Dr Tyne Daile Sumner (@tynedaile)April 12, 2019
Recently Research Platform Services ran a Hackathon for medical device innovation at the University of Melbourne. Postgraduate students from the Faculty of Engineering and the Faculty of Medicine, Dentistry and Health Sciences (MDHS) came together in this cross-disciplinary collaboration to create and develop a medical device product over the course of four weeks. The event aimed to encourage exciting conversations between faculties that normally do not interact with one another, to create and inspire new ideas. And we did exactly that!
To kick off our Hackathon we had the pleasure and honour of hearing keynote speakers present on a variety of topics, ranging from the successes and failures of medical devices by Mr Jason Chuen, to medical device start-ups and innovation in the Australian climate by Geoff Ayre. We also had the wonderful opportunity to hear from Jon, Sophie and Jenny of Team Soleguard, from the previous iteration of this hackathon.
So great to have @GeoffAyre come to speak to our participants about his journey with @umpshealth. Lots of great stories and lessons in his presentation at the first #hackathon session tonight @ResPlat.#3dprinting pic.twitter.com/llVeJPZjPE
— Eric Jong (@JongEric)March 28, 2019
These presentations helped contextualise the importance of collaboration between disciplines, as well as highlighting key considerations in product development and pointing out common pitfalls that many medical device products run into. They also provided a great source of entertainment and motivation for the attendees!
Thank you Jason @ozvascdoc for inspiring us @ResPlat to #innovate for patient care through collaboration! 3Device is excited to have you back soon as a panel judge @unimelb @JongEric pic.twitter.com/WIMEwcdrqM
— Gordon Chen (@GHJChen)March 28, 2019
Following the presentations, we needed to do some quick speed-dating to get acquainted and form teams for the next few weeks. In total we had 5 teams with each team having at least one clinical student, and one engineering student.
This diversity in knowledge was instrumental in identifying problems to address in the current medical space, as well as providing the skill set required to devise solutions to them. Many excellent conversations were had, with participants staying well past the proposed finish time to keep discussing their ideas.
In the second week of 3Devices, everyone got more technical with their ideas and concepts.
We had an introductory session with our Research Community Coordinator Eric Jong using TinkerCAD, the free and easy to use 3D design tool.
Gordon Chen, another of our Research Community Coordinators, took us through using 3D Slicer, a free, cross-platform, open-source medical image processing and visualisation system.
This was followed by a crash course in rapid prototyping using additive manufacturing with JD Hohmann from NExTLAB, the fabrication group from the Melbourne School of Design, who joined us for a talk about best practice for 3D-printing outputs. JD also announced a special prize for all participants: a printing voucher for the 3D printers in the NExTLAB for each team to use during their prototyping phase, and a generous gift card for the winning team!
With the product development phase well under way, the third night focused on how to make and deliver a pitch presentation. We were joined by Jeremy Kraybill from the Melbourne Accelerator Program, who showed us the basics of pitching.
Over the course of an hour, the teams learned different pitch formats and had the opportunity to practise their newly acquired skills in preparation for the fourth and final week - Pitch Night!
Stay tuned for the next post for more information on what products the teams created, and which one won!
Looking forward to seeing the ideas and prototypes from the @ResPlat #3Devices #Hackathon #PitchNight. #3dMed #MedTech. Thx to our organisers! pic.twitter.com/Jqq1k4LePA
— Jason Chuen (@ozvascdoc)April 18, 2019
Are you interested in learning 3D printing, or want to get involved in our next Hackathon? Please get in contact with me on Twitter: @JongEric
This blog post was created by Eric Jong, who is a Research Community Coordinator and TinkerCAD trainer at Research Platform Services @ResPlat.
On the 11th of April 2019 we held a meet-up on templates in LaTeX. The purpose of this event was not only to showcase some exemplary LaTeX templates, but also to show attendees how to use them. Keep reading to access the templates showcased in the meet-up, and to learn how to use them!
\documentclass{}
This will determine the overall structure and layout of your document.
% comments
Comments begin with a percentage sign. They do not run commands nor display text in the output. Comments are used in templates to explain things and to give the template user options.
\begin{document}
This line signifies the start of the document content. Everything before this line is called the preamble, and forms the settings of your document.
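Putting those three pieces together, a complete minimal document looks like this (a generic sketch for illustration, not one of the showcased templates):

```latex
\documentclass{article} % determines the overall structure and layout

% Everything from here to \begin{document} is the preamble
\title{My First Document}
\author{A. Researcher}

\begin{document} % the document content starts here
\maketitle
Hello, \LaTeX!
\end{document}
```

Paste this into a blank Overleaf project and it should compile as-is.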
Overleaf has a comprehensive guide for how to open and get started with a template: https://www.overleaf.com/learn/how-to/Creating_a_project_from_a_template
We showed an example starting with this presentation template: www.overleaf.com/latex/templates/presentation-template/ycwnkvzxyzwv

Following What to look for in a template above, we began by learning that this used a beamer document class. You can learn more about beamer documents on Overleaf.
We also played around with the different options for the presentation template appearance, which were initially listed as comments in the preamble. You can view some of the different options for beamer documents in the Beamer Theme Matrix.
If you are submitting a paper to a journal or conference, then you may be required to use a specific template. Otherwise, there is a wealth of templates available online. Here are some links to templates which may suit your needs:
Sorry Saras, we do not offer a thesis template.
When formatting your thesis, you must follow the Preparation of Graduate Research Theses Rules: https://t.co/CvnUsbEi6f
I recommend you check out the templates available on @Overleaf!— Meirian (@MeirianLT)August 2, 2018
A couple of PhDs from the University of Melbourne have published their own thesis templates.
I just posted in Github the Latex source code for a blank thesis for @unimelb : https://t.co/WksZlIi4as
— Simone Romano (@ialuronico)January 14, 2016
Also check out this thesis template created by Joshua Ellis: https://github.com/JP-Ellis/LaTeX
LaTeX is free and open source, so anyone can create their own LaTeX template!
It’s not as hard as you think. All you need is a working LaTeX document. To publish your document as a template on Overleaf, simply follow these instructions: www.overleaf.com/learn/how-to/Can_I_create_my_own_LaTeX_templates
Whether you’re just getting started with LaTeX, or thinking of publishing your own LaTeX template, I may be able to help! Please get in contact with me on Twitter: @MeirianLT
This blog post was created by Meirian Lovelace-Tozer, who is a Research Community Coordinator and LaTeX trainer at Research Platform Services @ResPlat.
Hello! My name is Alex and I’ll be taking over the reins for the Omeka training and community at Research Platform Services.

I am currently an honours student in the School of Culture and Communications exploring the relationship between paranoia and the culture wars in Australian media discourses. I’m also about to start working as a Digital Studio intern exploring children’s stories written by children.

Previously I have worked as a Students@Work intern, using Omeka to bring to life archival materials about Sir Redmond Barry – the first chancellor of the University of Melbourne – as well as to develop research data management skills. Throughout my time as an undergrad at the university I have also gained experience with web scraping and social network analysis tools. I have also presented a ResPitch at the 2019 Research Bazaar as part of the ResGrants program.

I’m looking forward to building the Omeka community at the university and running engaging training programs and meetups!
Feel free to get in touch with me at alex.shermon@unimelb.edu.au.
You can find out more about Omeka and future training sessions here.
Jonathan Garber, Python Research Community Coordinator, Research Platform Services
They have doubled! I have typically run our introduction course as a two-part evening class. Last year people would sign up for both nights, but now you can pick and choose which session to attend. This might be handy if you have a little bit of coding experience and might find the first session a little slow. The purpose of this blog is to help you make the right selections from this year’s Python menu:

Just like any decent menu, our Python menu has chili peppers to tell you how spicy each training is. Spiciness here means “level of prior knowledge required.” Here is what they mean:

Who should try 1 pepper? This class is geared for people who have never programmed before, and we take it nice and slow. However, if you have programmed before and don’t mind rehashing the basics, you will enjoy this class as well.

Who should try 2 peppers? Students from Part 1 who are keen to keep learning how to use Python, and students with some experience in Python or another programming language who can work through the Part 1 material on their own.

Who should try 3 peppers? Students who have a decent understanding of the introductory material.
Introduction to Python Part 1: (materials)

This is the first part of our Python curriculum. In four hours you will be gently introduced to Python as a programming language, and to Jupyter notebooks as an interpreter for that language. You will learn how to create the basic types of data, run some functions on that data, and store the data in variables, lists and dictionaries. You will even get to see a live representation of these data storage objects. Note: if you attend Part 1, you are guaranteed a spot in Part 2.
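For a flavour of the Part 1 material, here is a minimal sketch of the kind of code covered (the variable names and values are purely illustrative, not the workshop's own notebook):

```python
# Basic data types stored in variables
name = "Spartan"            # a string
cores = 4                   # an integer
speed = 2.6                 # a float

# A list holds an ordered collection of values
samples = [1.2, 3.4, 5.6]

# A dictionary maps keys to values
machine = {"name": name, "cores": cores, "clock_ghz": speed}

# Built-in functions operate on the data
print(len(samples))         # how many items are in the list
print(max(samples))         # the largest value
print(machine["cores"])     # look up a value by its key
```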
Introduction to Python Part 2: (materials)

This session builds on the lessons from Part 1. We will learn how to use logic to let the computer decide whether it will run some code, using if statements. We then learn about iteration using loops, where we get Python to repeat lines of code on different pieces of data. We finish by looking at data input, and some Python packages for numerical and tabular data analysis.
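To illustrate the if-statement-and-loop pattern described above, a minimal sketch (the temperature data are made up):

```python
temperatures = [18.5, 24.0, 31.2, 12.8]

# An if statement lets the computer decide which code to run,
# and a loop repeats that decision for each item in the list.
hot_days = []
for t in temperatures:
    if t > 25:
        hot_days.append(t)

print(hot_days)

# The same idea written as a one-line list comprehension
cold_days = [t for t in temperatures if t < 15]
print(cold_days)
```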
Intermediate Python: Functions, advanced iteration, and error handling

This is essentially Session 3, where we will learn more about creating loops, packaging sets of code into our own functions, and some tricks for error handling.
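A tiny sketch of what the session covers: wrapping code in a function and handling an error gracefully (the function below is a made-up example, not the workshop's own material):

```python
def safe_mean(values):
    """Return the mean of a list, handling the empty-list case."""
    try:
        return sum(values) / len(values)
    except ZeroDivisionError:       # len(values) == 0
        return None

# enumerate() is a common "advanced iteration" trick:
# it yields an index alongside each item.
for i, v in enumerate([10, 20]):
    print(i, v)

print(safe_mean([1, 2, 3]))   # a normal list
print(safe_mean([]))          # the error case, handled
```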
Coding in Python is a lot like eating tapas: you bring in multiple packages (dishes) and mix them together into a delicious script! Man, I am getting hungry…
An Introduction to NumPy and SciPy (materials)

In three hours, you will be introduced to Python’s answer to MATLAB: NumPy and SciPy. We will focus on how to use the NumPy array, which is the best way to organise numerical data. We will learn how to index, slice, filter and rearrange dimensions, and how to use functions to do the maths!
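A minimal sketch of the array operations mentioned above, using a small invented array:

```python
import numpy as np

data = np.array([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])

print(data[0, 1])         # indexing: row 0, column 1
print(data[:, 1])         # slicing: the whole second column
print(data[data > 3])     # filtering with a boolean mask
print(data.T.shape)       # rearranging dimensions via transpose
print(data.mean(axis=0))  # doing the maths: column means
```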
An Introduction to Pandas (materials)

Pandas is Python’s package for handling tabular “tidy” data. In a couple of lines of code, it can calculate and present heaps of statistics for you, as well as create formatted graphs. It is one of the most popular packages for doing data science in Python. In our workshop, we will use it to answer the age-old question: does Melbourne always have four seasons in a day?
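A small sketch of what “a couple of lines” of Pandas can do (the weather figures below are invented, not the workshop dataset):

```python
import pandas as pd

# A tiny, made-up table of daily maximum temperatures
weather = pd.DataFrame({
    "month": ["Jan", "Jan", "Jul", "Jul"],
    "max_temp": [32.1, 28.4, 13.9, 15.2],
})

# One line of grouped statistics: mean maximum per month
monthly = weather.groupby("month")["max_temp"].mean()
print(monthly)

# describe() summarises count, mean, std and quartiles in one call
print(weather["max_temp"].describe())
```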
An Introduction to Matplotlib (materials)

Matplotlib is the godparent of most Python plotting packages, and is probably the most customisable graph-making library in the entire Python ecosystem, let alone the data science realm. Get ready for some data-viz arts and crafts!
Who should go? Students who have a decent understanding of the introductory material and are keen to make interesting, customisable graphs in Python.
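A minimal sketch of a customised Matplotlib figure (the Agg backend is selected so it runs without a display, and the data are just squares of 0–9):

```python
import matplotlib
matplotlib.use("Agg")            # headless backend: no window needed
import matplotlib.pyplot as plt

x = list(range(10))
y = [v ** 2 for v in x]

fig, ax = plt.subplots()
ax.plot(x, y, marker="o", color="tab:blue", label="y = x^2")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_title("A first Matplotlib figure")
ax.legend()
fig.savefig("squares.png")       # write the figure to disk
```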
These are bespoke meetups where our extra-special ResLeads will share one of their favourite Python features.
Webscraping meetup with Beautiful Soup

Come learn how to scrape all of the text and pictures you could ever want off static websites using Beautiful Soup, Python’s premier HTML parser!
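A minimal sketch of pulling text and image links out of a page with Beautiful Soup (the HTML here is an inline example rather than a live website):

```python
from bs4 import BeautifulSoup

html = """
<html><body>
  <h1>Research Bazaar</h1>
  <p class="intro">Digital skills training.</p>
  <img src="poster.png">
</body></html>
"""

# Parse the page with Python's built-in html.parser
soup = BeautifulSoup(html, "html.parser")

print(soup.h1.get_text())                    # the heading text
print(soup.find("p", class_="intro").text)   # a paragraph by CSS class
print([img["src"] for img in soup.find_all("img")])  # all image sources
```

For a real static website you would fetch the HTML first (for example with the requests package) and feed that to BeautifulSoup instead.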
Then send me an email at jgarber@student.unimelb.edu.au and we can organise a meetup to learn yet another way to Python!
Do you love science and research? Are you the one all your friends/colleagues come to for their typesetting solutions? Can you explain technology eloquently and kindly to the grumpiest professor?
If you answered YES to any or all of the above, then Research Platform Services invites you to apply for one of our many Junior Research Community Coordinator positions! The successful applicants will grow their respective communities through regular workshops and meetups. Maintaining an online presence is also an essential part of the job. You would also be expected, and supported, to organise events within your communities, as well as Research Platforms-wide events such as the famous Research Bazaar conference. You can find out more about the Research Bazaar community in the first pages of our new publication, The Digital Research Skills Cookbook. Please see the below links for a detailed position description for each role, including how to apply.
Get in quick, applications close COB 28th February.
We’re hiring for:
We’re excited to announce that registrations are now open for the ResBaz Conference at the University of Melbourne! Register here: resbaz.edu.au.
ResBaz will be held on February 20th-21st 2019 at Wilson Hall.
Learn all about the digital tools you need to work smarter, not harder, in your graduate research, while also meeting your fellow researchers. There will be amazing food trucks, swag, and fantastic speakers too.
Just like at previous conferences, submit a digital research toolbox poster and receive a free lunch voucher! The top 10 will win a $20 Coles Myer gift card, while the top-voted poster wins a $500 flight voucher!
Want to get a feel for what this conference is all about? Watch our promo video below:
Have more questions? Check out our FAQ here.
To register head to resbaz.edu.au.

Professor Ian Gordon, Director, Statistical Consulting Centre and Melbourne Statistical Consulting Platform.
This is just a small sample of questions researchers across the University of Melbourne are pursuing by collecting quantitative data.
The Melbourne Statistical Consulting Platform provides statistical support to this research community. Our consultants work collaboratively with graduate research students and staff across all and any stages of the research cycle – from planning, data collection and management, to analysis, interpretation and reporting. Consultants have experience working with researchers from all faculties across the University, including those that have a strong tradition of quantitative research and those adopting new, novel and innovative quantitative research approaches in traditionally non-quantitative fields. We handle data of all sizes – from small specialised experiments in animal science to big data.
Investment in quality research planning and data collection must be matched in analysis and reporting. The practical, applied focus of the Platform services supports this goal.
In second semester, the Melbourne Statistical Consulting Platform will be running a free half-day workshop on producing quality graphs, using statistical software freely available to University of Melbourne staff and students.
The Platform supports graduate researchers doing a research higher degree (mainly PhD, some Masters) and University staff members.
You can find out more about the Platform here.
Dr Stephen Giugni, Associate Director, Research Platform Services.
Welcome to our Spring Newsletter!
Our focus for this issue is on increasing the efficiency of research publication via compute, community and consultancy. If I can use a little bit of executive licence, our focus is on supporting you to reduce the time to research outputs and their benefits.
The ability to move, process, analyse, engage with, share, compare, interpret and explore research data is facilitated by computational environments. Whether it is a large High Performance machine, the cloud, or your laptop – the ability to compute on data is fundamental to research, be it in linguistics and sentiment analysis through to genomics, complex simulation and modelling applications and artificial intelligence.
But raw computation is only part of the story. A key role that we provide to the research community is consultancy, endeavouring to understand your requirements and providing advice regarding the most appropriate environment or platform to support you, or to determine if we can develop something tailored for your needs, that assists in building community, enhancing interaction or accelerating outputs.
Hopefully, through the stories in this issue, you will see how we have been able to support a number of research activities – and perhaps how we might be able to work with you!
This year the University of Melbourne, in partnership with Deakin University, La Trobe University, RMIT and St Vincent’s Hospital, launched a brand-new General-Purpose Graphics Processing Unit (GPGPU) cluster as part of a new high-end compute service hosted by the University of Melbourne.
Funded through a combination of an ARC-LIEF grant, the University of Melbourne and our university partners, the service is operated on behalf of the partnership by the Melbourne School of Engineering (MSE), Melbourne Bioinformatics and Research Platform Services; it came online in July. Additionally, MSE contributed departmental funds to augment the service. The cluster consists of 72 nodes, each with four NVIDIA P100 graphics cards, which can provide a theoretical maximum of around 900 teraflops.

GPGPUs are a valuable resource for computational science, as each GPGPU chip contains thousands of cores that are optimised for certain kinds of tasks. For example, if the four-core CPU in a typical laptop were able to compute the flow of particles through a heart ventricle in a certain time, a GPGPU chip might be able to do the same task tens or hundreds of times faster. This is an example of Computational Fluid Dynamics (CFD), a type of process that is very well suited to the many cores in a GPGPU. Another research domain well suited to GPGPUs is Molecular Dynamics (MD), where the configurations and interactions of complex molecules and molecular chains are simulated on the GPGPU.
Perhaps the most rapidly expanding area of research to take advantage of GPGPUs is deep learning. Deep learning is a subfield of machine learning, in which a computer is trained to recognise and identify patterns in sets of data. For example, a computer can be trained to recognise a cat it has never seen by looking at many pictures of cats. The more pictures of cats used in the training process, the better the machine’s chance of identifying a new cat. This becomes very important when you consider that self-driving cars will need to identify not only cats, but all sorts of things in all sorts of configurations, in real time.

The new GPGPU service has seen a rapid take up, reaching full capacity (usage >95%) within six weeks of launch. Coupled with high-performance storage, the service is already supporting almost 100 research groups across the five partners and has processed over 100,000 jobs.
Research Platform Services has started running training to assist researchers to prepare their jobs for the GPGPU environment, with more courses including GPGPU programming planned for 2019.
While not every computing workload is well suited to GPGPUs, more and more applications are including modules specifically for GPGPUs (Matlab, anyone?). If you think you have a computational challenge that might benefit from a GPGPU, or would just like to know more about them, please email hpc-support@unimelb.edu.au and we will be in touch.
Lev Lafayette (Research Platform Services).
Many contemporary researchers are confronted with significant computational problems. Often their datasets have grown beyond the capacity of their desktop systems, or the complexity of their computational tasks is too great. This becomes even more challenging when one realises that both datasets and computational complexity are growing faster than improvements in desktop systems. All of a sudden many researchers discover that, in addition to their own domain specialisations, they also need an increasingly high level of familiarity with information science.
It is at this point that many researchers may have to turn to high performance computing. For many it’s not an easy transition; they may be used to a different operating system and a very different user interface. Many come to the environment with little, if any, experience with the Linux operating system, let alone the command-line interface and batch-job submission. They might be surprised that forwarding of graphics-intensive applications comes with major latency issues, if it is available at all; that ‘data management’ is something meaningful rather than just a buzzword; and that the version of the software being used, and even the compiler used to install it, is suddenly important.

Aspirin ion as produced with molecular dynamic simulation software NAMD, viewed locally with molecular modelling software VMD.
All of this makes for a steep learning curve, but the good news is that it’s worth it. At this point of one’s research activities one is working in a very advanced environment, and the challenges and results are commensurate. The use of the command line is no mere fancy: operating at the level of the system shell means that one is very close to the bare metal, rather than abstracted away by software and user-interface layers, and performance benefits as a result. Knowledge of the shell environment is not knowledge that goes away either; whilst incremental features have been added since the original shell of 1977, it is still fundamentally the same beast, and it will remain so for decades to come – for the rest of one’s research career and beyond.
All of this comes together in the batch-job submission system. An HPC cluster is essentially a large number of commodity servers linked together to act as one system, even if partitioned according to hardware (or even ownership), and shared between many users. With many users competing for this shared resource, some sort of queuing system is required – hence a scheduler, which receives data from a resource manager and allocates where and when jobs can run. It is because of this capability (in terms of interconnect) and capacity (in terms of processor cores) that users can run their complex or large-dataset tasks. How else is one going to run a complex computational problem that requires dozens of tasks to communicate with each other through a message passing interface (MPI) application across multiple compute nodes? How about running the same processing task over dozens of datasets at the same time, as with a job array? Unless you have access to an HPC cluster, this simply can’t be done efficiently or effectively.
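To make the scheduler and job-array ideas concrete, here is a sketch of what a submission script for a Slurm-style batch system might look like. The time limit, module version and script name are hypothetical, for illustration only; consult your cluster’s documentation for the real values.

```bash
#!/bin/bash
# Hypothetical job-array script for a Slurm-style scheduler.
#SBATCH --job-name=my-analysis
#SBATCH --ntasks=1               # each array element runs as a single task
#SBATCH --time=01:00:00          # wall time requested per element
#SBATCH --array=1-20             # twenty elements, one per dataset

# Module and script names below are illustrative only.
module load python/3.7.1
python analyse.py "data_${SLURM_ARRAY_TASK_ID}.csv"
```

Submitted with `sbatch`, this asks the scheduler to queue twenty copies of the same task, each picking up its own dataset via the array index.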
Of course, HPC is not the solution to all research or computational tasks. Long-running single-threaded applications whose datasets are dependent on each other are not always a good fit. Nevertheless, it is perhaps unsurprising that both the availability of HPC systems and HPC training correlates with research output. It is almost as if having powerful computing resources, and the knowledge of how to use them, means that data can be analysed faster and interpretation can begin earlier. It is something that many of the top universities around the world have realised, and the University of Melbourne has certainly come on board with major upgrades to the Spartan HPC system this year and with the ‘Petascale Campus’ plans. Most of all, the Research Platforms team will continue to provide the best assistance we possibly can to help researchers get their work done efficiently and effectively.