Editorial

This issue, starting Volume 16 of the Computer Science and Information Systems journal, consists of one invited and 13 regular articles. As always, we acknowledge the hard work and enthusiasm of our authors and reviewers, whose efforts were invaluable in producing the current issue.

The current issue begins with the invited paper “Cross-layer Design and Optimization Techniques in Wireless Multimedia Sensor Networks for Smart Cities” by Hasan Ali Khattak et al., which surveys the state of the art in cross-layer optimization approaches for the structural design of wireless sensor networks, aiming to address issues such as energy conservation, variable channel capacity, and the resource-constrained nature of such networks. Techniques are evaluated and discussed through the prism of six criteria: energy efficiency, quality of service, communication reliability, security, error correction, and network resource management. Challenges and future directions are highlighted in detail, emphasizing the use of multimedia data and applications on the Internet of Things.

In the first regular article, “SOCA-DSEM: a Well-Structured SOCA Development Systems Engineering Methodology,” Laura C. Rodriguez-Martinez et al. tackle the theoretical and practical deficiencies of service-oriented computing application (SOCA) development methodologies. The authors first argue for the use of agile approaches (more specifically, agility-rigor methodologies), and then propose a new SOCA development systems engineering methodology (DSEM), providing its description, theoretical foundations, an illustration of its use on a prototype running example, and two pilot empirical evaluations based on usability metrics.

“Building a flexible BPM Application in an SOA based Legacy Environment,” by Mladen Matejaš and Krešimir Fertalj, proposes an integration model for building a business process management application (BPMA) and connecting it with legacy systems based on service-oriented architecture (SOA). Its notable characteristics are the simple co-dependence of the BPMA and existing systems, minimal changes to legacy applications, and maximal use of existing functionalities. The feasibility of the approach is demonstrated on a real-life business use case scenario.

Marko Janković et al., in “Reconstructing De-facto Software Development Methods,” present an approach through which companies can document and reconstruct the software development methods that they actually use. Contrary to existing approaches, the proposed method does not place a great burden on company staff, but rather extracts information on development practice directly from software repositories. The approach was developed through studying five companies, and evaluated on a real software repository shared by an additional company, demonstrating its effectiveness on various aspects of the development process.

“CrocodileAgent 2018: Robust Agent-based Mechanisms for Power Trading in Competitive Environments,” authored by Demijan Grgic et al., addresses the problem of power brokerage in sustainable energy systems enabled by smart grids. The proposed agent-based approach fared very well (3rd overall) in the 2018 Power Trading Agent Competition, thanks to its focus on the creation of smart time-of-use tariffs to reduce peak-demand charges.

In their article entitled “Estimating Point-of-Interest Rating Based on Visitors Geospatial Behaviour,” Matej Senožetnik et al. deal with the problem of sparse ratings by extracting points-of-interest from various sources and proposing an approach to estimating point-of-interest ratings based on geospatial data of their visitors. Ratings are estimated using various machine-learning techniques (including support vector machines, linear regression, random forests and decision trees), and the approach is experimentally evaluated on the problem of motorhome users visiting campsites in European countries.

“A Study of Sampling with Imbalanced Data Sets – Case of Credit Risk Assessment” by Kristina Andrić et al. investigates the role of sample size and class distribution in credit risk assessment, through a large-scale experimental evaluation of real-life data sets of different characteristics, using several classification algorithms and performance measures. The results indicate that the performance measure, classification algorithm, and data set characteristics all play a role in determining the optimal class distribution, offering insight into how to design the training sample and select the classification algorithm to maximize prediction performance.

The article “Towards Understandable Personalized Recommendations: Hybrid Explanations,” by Martin Svrcek et al., addresses the problem of trust in recommender systems and their recommendations by proposing an approach for making recommendations transparent and understandable to users. This is achieved with a hybrid method for personalized explanation of recommendations, independent of the recommendation technique, which combines basic explanation styles to provide the appropriate type of personalized explanation to each user. Experimental evaluation in the news domain demonstrated improvements not only in users’ attitudes towards recommendations, but also in recommendation precision.

George Lagogiannis et al., in their article “On the Randomness that Generates Biased Samples: The Limited Randomness Approach,” tackle the problem of generating exponentially biased samples from data streams by proposing an algorithm that uses O(1) random bits per stream element, as opposed to existing algorithms and the theoretical limit of O(n) random bits, where n is the sample size. This is achieved in a specific setting where survival probabilities are assigned to the stream elements before they start to arrive.

In “Retinal Blood Vessel Segmentation Based on Heuristic Image Analysis,” Maja Braović et al. improve the automatic analysis of retinal fundus images for disease diagnosis by proposing an approach to retinal blood vessel segmentation that is simpler than existing approaches, as it is based on a cascade of very simple image processing methods. The method emphasizes specificity over sensitivity, and is demonstrated to achieve high average accuracy on two publicly available data sets.

“Data Visualization Techniques in Medical Image Compression Evaluation – An Empirical Study,” by Dinu Dragan et al., presents an empirical study of multidimensional visualization techniques, considering tables (as a control), scatterplots, parallel coordinates, and star plots for the problem of decision making in picture archiving and communication system (PACS) design. In the experiments, three groups of subjects were presented with visualizations using a specially developed tool, and their decisions were recorded in order to determine which visualization technique leads to the best decisions. The conclusions include that visualizations produce better results than tables, and that 2D parallel coordinates perform best on high-dimensional data.

Francisco Jurado and Pilar Rodriguez, in “An Experience on Automatically Building Lexicons for Affective Computing in Multiple Target Languages,” tackle the problem of building lexicons for affective computing on texts written in languages other than English. The presented approach starts from initial seeds of English words that define the emotions of interest, expands them with related words in a bootstrapping process, and finally obtains a lexicon by processing the context sentences from parallel translated texts where the terms have been used. Exploratory analysis shows promising results, with similar affective fingerprints observed in different translations of the same books.

In their article “Improving Sentiment Analysis for Twitter Data by Handling Negation Rules,” Adela Ljajić and Ulfeta Marovac explore how the explicit treatment of negation, using grammatical rules that affect polarity, impacts the effectiveness of sentiment analysis of tweets in the Serbian language. Experimental evaluation showed a statistically significant improvement in sentiment analysis when negation was processed using the proposed rules with either the lexicon-based approach or machine learning methods.

Finally, “Rejecting the Death of Passwords: Advice for the Future,” by Leon Bošnjak and Boštjan Brumen, discusses the problems of using textual passwords for authentication, as well as their alternatives. An interesting historical perspective is given, showing that real user-generated passwords used today are no better than those used four decades ago. The article gives recommendations on how to improve password security, presents arguments against the proposed replacements for textual passwords, and discusses the conditions that need to be met for passwords to finally be replaced.

Editor-in-Chief
Mirjana Ivanović

Managing Editor
Miloš Radovanović