Planet Primates

October 26, 2014

Planet Clojure

A guide to setting up a test-driven workflow for Clojure

Just a heads-up: while this post assumes Emacs, the takeaway is the workflow, which is fairly portable to other editors.

Clojure test workflow in Emacs

When developers practising test-driven development[1] in other languages first move to Clojure, one thing that becomes painfully apparent is the slow startup time of a Clojure process. While the JVM is partly to blame for this, it is Clojure itself that takes a significant amount of time to boot up.[2]

The Clojure community is working on reducing the startup time; however, it is still noticeably slow, especially if one is coming from dynamic languages such as Ruby, Python or JavaScript. This post presents a workflow where one can have almost instantaneous feedback on running tests, right from within the editor, without reloading the Clojure process over and over.

Intended audience

What follows is my take on what a productive TDD workflow in Clojure could be. This article expects the audience to be well versed in their respective editors and to have launched/played with a Clojure REPL before. It is also smoother to follow if one has already used Stuart Sierra’s reloaded[3] workflow. While the article makes use of Emacs as the editor, the workflow is what matters and should be fairly portable to any configurable editor.

Please note that, to get similar results, your preferred editor should be able to talk to a running Clojure process via nREPL, as well as display the results from it.

Preparing the Clojure project

First things first, the Clojure project has to be prepared. The first step is to restructure the project around the reloaded workflow; afterwards, we will add an nREPL middleware so that editors can talk to the running process.

Creating a new reloadable project

There is an existing leiningen template to generate new projects based on this workflow. If you are starting from scratch, all you really need to execute is a single lein command.

lein new reloaded com.myname/awesome-project

Transforming into a reloadable project

This is probably the trickiest part, especially if you already have an existing Clojure project that you would want to restructure. If you are starting from scratch, you can skip this section.

This is entirely based on Stuart’s reloaded workflow. Since it is the major workhorse of the whole testing-without-restarting approach, it is recommended to read the well-detailed post describing the workflow before continuing. Behind the scenes, the workflow makes use of tools.namespace.[4]

The main idea of this workflow is to create a namespace which provides a system constructor. This constructor function should return a new instance of the whole application/system.[5]

Some more functions are present in the namespace which manage the starting and stopping of the application, conveniently called start and stop. Finally, the workflow makes use of the fact that Clojure will automatically load a file named user.clj when starting a REPL in the given environment (the Clojure REPL starts in the user namespace by default, though this is configurable).

A user.clj file is added to the dev folder, which provides some more functions that initialize, start and stop the system. Conveniently, (go) and (reset) functions are provided that wrap around all the other ones. They respectively start the process and reset it (reloading all the changed/new files, etc.) so as to have a clean REPL to work on.

Since this depends on the complexity and design of each individual project, it is recommended to follow the above-mentioned post to properly integrate the workflow into an already existing application.

Preparing for nREPL

To enable the Clojure project to let clients talk to it via nREPL, we will add a plugin called cider-nrepl[6] to the project.clj.

CIDER is an Emacs package, and cider-nrepl, as a Clojure project plugin, enhances its capabilities; the plugin is well consumed by other editors too (e.g. vim-fireplace[7] makes use of cider-nrepl whenever available).

Add the plugin to the project.clj.

:plugins [...
          [cider/cider-nrepl "0.7.0"]
          ...]

Please visit the project page and make sure you’re using the latest version at the time of reading.

Trying it out

Now that the project has been set up, let’s make sure it’s working normally. As expected, we will start a normal REPL and test that (go) and (reset) work properly.

After you first launch the REPL via leiningen,

lein repl

run the functions provided by the reloaded workflow

user=> (go)
:ready
user=> (reset)
:reloading (.... a list of all your files in the project ...)

Running (reset) should reload all your files in the project and give you a clean slate to work with in the running process. This is the magic that the test workflow further ahead makes use of.

Tuning the editor

The following section will have instructions for Emacs, but it should be applicable to any fairly configurable editor.

Talking over nREPL

As mentioned earlier, CIDER is the package used by Emacs to talk to the Clojure project via nREPL. Install the package into Emacs the way you prefer.

Please make sure that the cider-nrepl plugin for the Clojure project and the CIDER package for Emacs are compatible with each other. This can be checked on the respective projects’ pages. As of this writing, the release version numbers are synchronised between the projects.

Executing tests on the nREPL

After installing the package/plugin, ensure that it’s loaded and open the Clojure project that you want to work on. Load up the test file and fire off a REPL via the editor.

After connecting to the REPL one can execute tests directly on it, change the files in the editor, reload them in the REPL using (reset) and rerun the tests.

This can be done in Emacs as follows.

;; Fire off cider to connect to a REPL.
;; This will take more than just a few seconds.
M-x cider-jack-in<RET>
;; Also C-c M-j, as provided by CIDER

Once the REPL is ready, initialize the system and interact with it.

user=> (go)
:ready
;; Now one can use the test runner functions provided by testing libraries;
;; for example, clojure.test tests can be run as follows
user=> (clojure.test/run-all-tests)
;; After this, one can change the files that they are working on and
;; reset the REPL
user=> (reset)
:reloading (.... a list of all your files in the project ...)
;; now run the tests again / etc.

However, the above-mentioned method is a poor wo/man’s way of running tests in the REPL. One can directly make use of CIDER functionality to run or re-run all tests or selected ones.

This can be done in Emacs as follows.

;; Make sure the REPL is running and the project has started (go)
;; Open the test file you want to work on and execute the command
M-x cider-test-run-tests<RET>
;; also C-c ,
;; the above will run all the tests in the file and show results
;; either in the buffer (when failed) or in the echo line (if passed)

;; one can also selectively run tests
;; place the cursor on the test you want to run and execute
M-x cider-test-run-test<RET>
;; also C-c M-,

While executing the tests has gotten a bit faster, the problem remains that the REPL has to be reloaded every time something in the code changes. The final section deals with this.

Lightning quick re-runs

omg, i haz an elisp :) ...... basically !

Let’s write some elisp to put the whole reload-and-run-tests flow just a keypress away. Add the following to your relevant Emacs configuration files.

This example only binds a keypress that reloads the code and runs all the tests in the namespace, but you should be able to get the idea and extend it.

(defun cider-repl-command (cmd)
  "Execute commands on the cider repl"
  (cider-switch-to-repl-buffer)
  (goto-char (point-max))
  (insert cmd)
  (cider-repl-return)
  (cider-switch-to-last-clojure-buffer))

(defun cider-repl-reset ()
  "Assumes reloaded + tools.namespace is used to reload everything"
  (interactive)
  (save-some-buffers)
  (cider-repl-command "(user/reset)"))

(defun cider-reset-test-run-tests ()
  (interactive)
  (cider-repl-reset)
  (cider-test-run-tests))

(define-key cider-mode-map (kbd "C-c r") 'cider-repl-reset)
(define-key cider-mode-map (kbd "C-c .") 'cider-reset-test-run-tests)

Now you should be able to press C-c . to reset and run all tests via CIDER, as well as reset the REPL separately via C-c r.

Conclusion

I wrote this guide because I couldn’t find a single source of information for the workflow presented above; I have pulled in ideas from a lot of other sources to come up with it.

I hope this is of some use to you as well. Feel free to share your suggestions and thoughts regarding improvements/corrections below.

Update: I realised a bit late that I missed the published date by a whole month, but I'll let the permalinks stay for now and not break any links directed here.

Footnotes

  1. Test-driven development here means, in the wide sense, all styles of instant-feedback testing in dynamic languages, without a preference for test-first, test-driven, etc. The point is being able to execute the test one is writing without switching the ongoing context too much (e.g. without leaving the editor).

  2. Solving Clojure Boot Time

  3. Stuart Sierra has explained his reloaded workflow in detail here. He has also created a leiningen template for creating new reloaded projects.

  4. According to the tools.namespace GitHub page, it includes tools for managing namespaces (generating a dependency graph, reloading, etc.) in Clojure.

  5. Called system, because it represents the whole system or the application that one is working on.

  6. cider-nrepl is a collection of nREPL middleware designed to enhance CIDER, as explained here.

  7. vim-fireplace is a Vim plugin for Clojure that provides a REPL within the editor.

by Suvash Thapaliya at October 26, 2014 11:00 PM

October 22, 2014

QuantOverflow

Relationship between Beta and Standard Deviation

I was doing some financial analysis on two firms in the coffee industry. After calculating Beta and Standard Deviation for both firms, I seem to have stumbled on some weird phenomenon.

It appears that firm A has a higher standard deviation than firm B, while also possessing a lower beta coefficient.

How is this possible? I had the impression that standard deviation and beta were both measures of risk / volatility, and a higher standard deviation would naturally lead to a higher beta.

Your help would be greatly appreciated. Thanks and have a nice day!
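As an aside, the standard decomposition of beta shows why this is possible:

$\beta_A = \frac{\mathrm{Cov}(r_A, r_M)}{\mathrm{Var}(r_M)} = \rho_{A,M} \, \frac{\sigma_A}{\sigma_M}$

Standard deviation $\sigma_A$ measures total volatility, while beta scales it by the correlation $\rho_{A,M}$ with the market, i.e., only the systematic part. A firm with a higher $\sigma_A$ but a sufficiently lower market correlation can therefore still have the lower beta.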

by James at October 22, 2014 06:57 AM

StackOverflow

Construct a sequence of time related names

We have the following sequence in our code:

val halfHourlyColumnNames = Seq("t0000", "t0030", "t0100", "t0130", "t0200", "t0230", "t0300", "t0330", "t0400", "t0430", "t0500", "t0530", "t0600", "t0630", "t0700", "t0730", "t0800", "t0830", "t0900", "t0930", "t1000", "t1030", "t1100", "t1130", "t1200", "t1230", "t1300", "t1330", "t1400", "t1430", "t1500", "t1530", "t1600", "t1630", "t1700", "t1730", "t1800", "t1830", "t1900", "t1930", "t2000", "t2030", "t2100", "t2130", "t2200", "t2230", "t2300", "t2330")

I would like to rewrite this in a much more concise way. What would be the shortest way to create the above sequence in Scala?
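One concise possibility, as a sketch (any zero-padded formatting approach would work equally well):

val halfHourlyColumnNames: Seq[String] =
  for {
    hour   <- 0 until 24   // hours 00..23
    minute <- Seq(0, 30)   // two half-hour marks per hour
  } yield f"t$hour%02d$minute%02d"  // "t0000", "t0030", ..., "t2330"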

by joscas at October 22, 2014 06:56 AM

Scala: Why can't foldLeft be used for a concat of two lists?

Defining a concat function with foldRight, as below, concatenates lists correctly:

def concat[T](xs: List[T], ys: List[T]): List[T] = (xs foldRight(ys))(_ :: _)

but doing so with foldLeft

def concat1[T](xs: List[T], ys: List[T]): List[T] = (xs foldLeft(ys))(_ :: _)

results in a compilation error, value :: is not a member of type parameter T. I need help understanding this difference.
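For context: foldRight combines with an (element, accumulator) => accumulator function, so in the first version the accumulator is the list ys and each element of xs is prepended to it. foldLeft combines with (accumulator, element) => accumulator, so _ :: _ would have to invoke :: on the element, which has type T -- exactly the reported error. A sketch of a foldLeft-based variant that does work, reversing the first list so prepending still produces the right order:

def concat1[T](xs: List[T], ys: List[T]): List[T] =
  (xs.reverse foldLeft ys)((acc, x) => x :: acc)

// concat1(List(1, 2), List(3, 4)) == List(1, 2, 3, 4)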

by Somasundaram Sekar at October 22, 2014 06:54 AM

How to write multiple statements in one if block in clojure?

I'm writing a function in Clojure that takes 2 arguments (both lists) and iterates over the vehicles by recursion until the vehicle list becomes empty. The function is:

(defn v [vehicles locs]
    (if (= (count vehicles) 0)
        nil
        (if (> (count vehicles) 0)
            (split-at 1 locs)
            (v (rest vehicles) (rest locs))
        ))
    )

So, if I give the input as (v '(v1 v2 v3) '([1 2] [2 3] [4 2] [5 3])), then I want the output as [([1 2]) ([3 4]) ([5 6] [6 7])]. I know that the statement (v (rest vehicles) (rest locs)) is not executing, because it is being taken as the else case, i.e., when (count vehicles) is not > 0. I want to know how I can make this statement execute in the same if branch, i.e., in (if (> (count vehicles) 0).

by Erica Maine at October 22, 2014 06:54 AM

Scala getting field and type of field of a case class

So I'm trying to get the fields and their types in a case class. At the moment I am doing it like so:

typeOf[CaseClass].members.filter(!_.isMethod).foreach{
   x =>
     x.typeSignature match {
        case _:TypeOfFieldInCaseClass => do something
        case _:AnotherTypeOfFieldInCaseClass => do something
     }
}

the problem is that x.typeSignature is of type reflect.runtime.universe.Type, which cannot be pattern-matched against the types of the fields in the case class. Is there some way to do this?
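One possible way around this, as a sketch (Person is a hypothetical case class standing in for CaseClass): instead of pattern-matching on the reflected Type, compare it against known types with =:= (type equality) or <:< (subtyping):

import scala.reflect.runtime.universe._

case class Person(name: String, age: Int)

// Compare each field's reflected Type against known types instead of
// pattern-matching on it.
typeOf[Person].members
  .collect { case m: TermSymbol if m.isVal || m.isVar => m }
  .foreach { field =>
    val tpe = field.typeSignature
    if (tpe =:= typeOf[String]) println(s"${field.name}: a String field")
    else if (tpe =:= typeOf[Int]) println(s"${field.name}: an Int field")
    else println(s"${field.name}: some other type ($tpe)")
  }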

by Justin Juntang at October 22, 2014 06:41 AM

CompsciOverflow

Is $K' = \{ w \in \{0,1\}^* | M_w$ Halts on $w \}$, where $M_w$ is the TM whose encoding is $w$, equivalent to the halting problem?

My professor presented the halting problem as $K' = \{ w \in \{0, 1\}^* | M_w$ Halts on $w \}$, where $M_w$ is the TM whose encoding is $w$ (i.e. $w = \langle M \rangle$), and said it was equivalent to $K = \{ \langle M,v \rangle | M $ Halts on $v \}$.

I tried to think it through and this is my understanding so far:

$(1)$ We can define $f(w)= \langle M_w,w \rangle$; then $w \in K' \Leftrightarrow f(w) \in K$, so we have that $K'$ can be reduced to $K$.

$(2)$ For the other way around, we define $g(\langle M,v \rangle) = \langle M' \rangle$, where $M'$ is the TM that ignores its own input, simulates the TM $M$ on input $v$, and halts iff $M$ halts on that input. Since $M'$ ignores its input, it halts on its own encoding $\langle M' \rangle$ exactly when $M$ halts on $v$. So we get that both languages are reducible to each other and thus equivalent.

But I'm not sure about part $(2)$, is the reasoning correct? Can someone shed some light on my problem?

by Zakaria Soliman at October 22, 2014 06:40 AM

StackOverflow

Scala sum of a binary tree tail recursive

So I have the following code that defines a binary tree.

sealed abstract class BinTree {
  def sum = sumAcc(0)
  def sumAcc(acc: Int): Int
  def incl(x: Int): BinTree
}

case class NonEmpty(val elem: Int, val left: BinTree, val right: BinTree) extends BinTree {
  def sumAcc(acc: Int) = right.sumAcc(left.sumAcc(elem + acc))
  def incl(x: Int): BinTree =
    if (x < elem) new NonEmpty(elem, left incl x, right)
    else if (x > elem) new NonEmpty(elem, left, right incl x)
    else this
  override def toString = "{" + left + elem + right + "}"
}

case object Empty extends BinTree {
  def sumAcc(acc: Int) = acc
  def incl(x: Int): BinTree = new NonEmpty(x, Empty, Empty)
  override def toString = "."
}

val rootTree = NonEmpty(1, NonEmpty(2, NonEmpty(3, Empty, Empty), Empty), Empty)
rootTree.sum

Is the sum method tail-recursive? I suspect it is not, because the call to right.sumAcc has to wait for left.sumAcc(elem + acc) to terminate.

If it's not tail-recursive, how can I change it?
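It is not: the outer call right.sumAcc(...) is in tail position, but the inner left.sumAcc(elem + acc) has to return first, so the stack grows with the depth of the left spine. One standard fix, sketched below against the BinTree types from the question, is to manage pending nodes in an explicit worklist so the single recursive call is in tail position:

import scala.annotation.tailrec

def sumIterative(root: BinTree): Int = {
  // Nodes still to be visited are kept in an explicit list.
  @tailrec
  def loop(todo: List[BinTree], acc: Int): Int = todo match {
    case Nil => acc
    case Empty :: rest => loop(rest, acc)
    case NonEmpty(elem, left, right) :: rest =>
      loop(left :: right :: rest, acc + elem)
  }
  loop(List(root), 0)
}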

by edblancas at October 22, 2014 06:39 AM

AWS

New AWS Directory Service

Virtually every organization uses a directory service such as Active Directory to allow computers to join domains, list and authenticate users, and locate and connect to printers and other network services, including SQL Server databases. A centralized directory reduces the amount of administrative work that must be done when an employee joins the organization, changes roles, or leaves.

With the advent of cloud-based services, an interesting challenge has arisen. By design, the directory is intended to be a central source of truth with regard to user identity. Administrators should not have to maintain one directory service for on-premises users and services, and a separate, parallel one for the cloud. Ideally, on-premises and cloud-based services could share and make use of a single, unified directory service.

Perhaps you want to run Microsoft Windows on EC2 or centrally control access to AWS applications such as Amazon WorkSpaces or Amazon Zocalo. Setting up and then running a directory can be a fairly ambitious undertaking once you take into account the need to procure and run hardware and to install, configure, and patch the operating system and the directory, and so forth. This might be overkill if you have a user base of modest size and just want to use the AWS applications and exercise centralized control over users and permissions.

The New AWS Directory Service
Today we are introducing the AWS Directory Service to address these challenges! This managed service provides two types of directories. You can connect to an existing on-premises directory or you can set up and run a new, Samba-based directory in the Cloud.

If your organization already has a directory, you can now make use of it from within the cloud using the AD Connector directory type. This is a gateway technology that serves as a cloud proxy to your existing directory, without the need for complex synchronization technology or federated sign-on. All communication between the AWS Cloud and your on-premises directory takes place over AWS Direct Connect or a secure VPN connection within an Amazon Virtual Private Cloud. The AD Connector is easy to set up (just a few parameters) and needs very little in the way of operational care and feeding. Once configured, your users can use their existing credentials (user name and password, with optional RADIUS authentication) to log in to WorkSpaces, Zocalo, EC2 instances running Microsoft Windows, and the AWS Management Console. The AD Connector is available in two sizes: Small (up to 10,000 users, computers, groups, and other directory objects) and Large (up to 100,000 users, computers, groups, and other directory objects).

If you don't currently have a directory and don't want to be bothered with all of the care and feeding that's traditionally been required, you can quickly and easily provision and run a Samba-based directory in the cloud using the Simple AD directory type. This directory supports most of the common Active Directory features including joins to Windows domains, management of Group Policies, and single sign-on to directory-powered apps. EC2 instances that run Windows can join domains and can be administered en masse using Group Policies for consistency. Amazon WorkSpaces and Amazon Zocalo can make use of the directory. Developers and system administrators can use their directory credentials to sign in to the AWS Management Console in order to manage AWS resources such as EC2 instances or S3 buckets.

Getting Started
Regardless of the directory type that you choose, getting started is quick and easy. Keep in mind, of course, that you are setting up an important piece of infrastructure, so choose your names and passwords accordingly. Let's walk through the process of setting up each type of directory.

I can create an AD Connector as a cloud-based proxy to an existing Active Directory running within my organization. I'll have to create a VPN connection from my Virtual Private Cloud to my on-premises network, making use of AWS Direct Connect if necessary. Then I will need to create an account with sufficient privileges to allow it to handle lookup, authentication, and domain join requests. I'll also need the DNS name of the existing directory. With that information in hand, creating the AD Connector is a simple matter of filling in a form:

I also have to provide it with information about my VPC, including the subnets where I'd like the directory servers to be hosted:

The AD Connector will be up & running and ready to use within minutes!

Creating a Simple AD in the cloud is also very simple and straightforward. Again, I need to choose one of my VPCs and then pick a pair of subnets within it for my directory servers:

Again, the Simple AD will be up, running, and ready for use within minutes.

Managing Directories
Let's take a look at the management features that are available for the AD Connector and Simple AD. The Console shows me a list of all of my directories:

I can dive in to the details with a click. As you can see at the bottom of this screen, I can also create a public endpoint for my directory. This will allow it to be used for sign-in to AWS applications such as Zocalo and WorkSpaces, and to the AWS Management Console:

I can also configure the AWS applications and the Console to use the directory:

I can also create, restore, and manage snapshot backups of my Simple AD (backups are done automatically every 24 hours; I can also initiate a manual backup at any desired time):

Get Started Today
Both types of directory are available now and you can start creating and using them today in any AWS Region. Prices start at $0.05 per hour for Small directories of either type and $0.15 per hour for Large directories of either type in the US East (Northern Virginia) Region. See the AWS Directory Service page for pricing information in the other AWS Regions.

-- Jeff;

by Jeff Barr (awseditor@amazon.com) at October 22, 2014 06:33 AM

StackOverflow

Getting "msg: Failed to find required executable easy_install" when trying to bring up a vagrant instance with ansible

One man. One mission. Configure a Vagrant machine with Ansible for use as a Python development environment.

I have attempted to provision a Vagrant machine with Ansible.

I set up my directory structure for it according to the instructions outlined here: https://danielgroves.net/notebook/2014/05/development-environments/

Everything went swimmingly, as shown in the initial part of the response to "vagrant up":

$vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Importing base box 'ubuntu/trusty64'...
==> default: Matching MAC address for NAT networking...
==> default: Checking if box 'ubuntu/trusty64' is up to date...
==> default: Setting the name of the VM: wa2_default_1413954520562_41027
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Available bridged network interfaces:
1) wlan0
2) eth0
    default: What interface should the network bridge to? 1
==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
    default: Adapter 2: bridged
==> default: Forwarding ports...
    default: 8080 => 8080 (adapter 1)
    default: 22 => 2222 (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Connection timeout. Retrying...
    default: Warning: Remote connection disconnect. Retrying...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Configuring and enabling network interfaces...
==> default: Mounting shared folders...
    default: /vagrant => /home/useruser/proj/wa2
==> default: Running provisioner: ansible...
ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false PYTHONUNBUFFERED=1 ANSIBLE_SSH_ARGS='-o ForwardAgent=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --private-key=/home/useruser/.vagrant.d/insecure_private_key --user=vagrant --connection=ssh --limit='default' --inventory-file=/home/useruser/proj/wa2/.vagrant/provisioners/ansible/inventory -vvv provision/playbook.yml

PLAY [all] ******************************************************************** 

GATHERING FACTS *************************************************************** 
<127.0.0.1> ESTABLISH CONNECTION FOR USER: vagrant
<127.0.0.1> REMOTE_MODULE setup
<127.0.0.1> EXEC ['ssh', '-C', '-tt', '-q', '-o', 'ForwardAgent=yes', '-o', 'ControlMaster=auto', '-o , 'ControlPersist=60s', '-o', 'ControlPath=/home/useruser/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'StrictHostKeyChecking=no', '-o', 'Port=2222', '-o', 'IdentityFile="/home/useruser/.vagrant.d/insecure_private_key"', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'User=vagrant', '-o', 'ConnectTimeout=10', '127.0.0.1', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1413954587.93-167869388068052 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1413954587.93-167869388068052 && echo $HOME/.ansible/tmp/ansible-tmp-1413954587.93-167869388068052'"]
<127.0.0.1> PUT /tmp/tmpr8f2Xo TO /home/vagrant/.ansible/tmp/ansible-tmp-1413954587.93-167869388068052/setup
<127.0.0.1> EXEC ['ssh', '-C', '-tt', '-q', '-o', 'ForwardAgent=yes', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/home/useruser/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'StrictHostKeyChecking=no', '-o', 'Port=2222', '-o', 'IdentityFile="/home/useruser/.vagrant.d/insecure_private_key"', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'User=vagrant', '-o', 'ConnectTimeout=10', '127.0.0.1', u"/bin/sh -c 'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1413954587.93-167869388068052/setup; rm -rf /home/vagrant/.ansible/tmp/ansible-tmp-1413954587.93-167869388068052/ >/dev/null 2>&1'"]
ok: [default]

TASK: [easy_install name=pip] ************************************************* 
<127.0.0.1> ESTABLISH CONNECTION FOR USER: vagrant
<127.0.0.1> REMOTE_MODULE easy_install name=pip
<127.0.0.1> EXEC ['ssh', '-C', '-tt', '-q', '-o', 'ForwardAgent=yes', '-o', 'ControlMaster=auto', '-o , 'ControlPersist=60s', '-o', 'ControlPath=/home/useruser/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'StrictHostKeyChecking=no', '-o', 'Port=2222', '-o', 'IdentityFile="/home/useruser/.vagrant.d/insecure_private_key"', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'User=vagrant', '-o', 'ConnectTimeout=10', '127.0.0.1', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1413954593.04-227274886109270 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1413954593.04-227274886109270 && echo $HOME/.ansible/tmp/ansible-tmp-1413954593.04-227274886109270'"]
<127.0.0.1> PUT /tmp/tmptFp6Ev TO /home/vagrant/.ansible/tmp/ansible-tmp-1413954593.04-227274886109270/easy_install
<127.0.0.1> EXEC ['ssh', '-C', '-tt', '-q', '-o', 'ForwardAgent=yes', '-o', 'ControlMaster=auto', '-o , 'ControlPersist=60s', '-o', 'ControlPath=/home/useruser/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'StrictHostKeyChecking=no', '-o', 'Port=2222', '-o', 'IdentityFile="/home/useruser/.vagrant.d/insecure_private_key"', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'User=vagrant', '-o', 'ConnectTimeout=10', '127.0.0.1', u"/bin/sh -c 'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1413954593.04-227274886109270/easy_install; rm -rf /home/vagrant/.ansible/tmp/ansible-tmp-1413954593.04-227274886109270/ >/dev/null 2>&1'"]
failed: [default] => {"failed": true}
msg: Failed to find required executable easy_install

FATAL: all hosts have already failed -- aborting

PLAY RECAP ******************************************************************** 
       to retry, use: --limit @/home/useruser/playbook.retry

default                    : ok=1    changed=0    unreachable=0    failed=1   

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

My directory tree is here: http://hastebin.com/qecudisuco.avrasm

Ultimately, the goal is to get set up for GAE work in Vagrant, with Ansible for automatic provisioning. (This may or may not be a good idea.)

failed: [default] => {"failed": true}
msg: Failed to find required executable easy_install

FATAL: all hosts have already failed -- aborting

PLAY RECAP ******************************************************************** 
           to retry, use: --limit @/home/useruser/playbook.retry

default                    : ok=1    changed=0    unreachable=0    failed=1   

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

How do I get Ansible to install pip on my Vagrant box?

Really, any Python provisioning will work.

I have read through the documentation on playbooks and easy_install.

For those worried about the lack of an inventory file, Vagrant creates one on its own. Vagrantup.com has a page on this, which I will post if I get the reputation necessary for it.

I'll go back through this question and revise it after I have thought about this a bit more.

by 37coins at October 22, 2014 06:07 AM

CompsciOverflow

limitation on the depth of directory tree

"write a program that creates a directory and then changes to that directory, in a loop. Make certain that the length of the absolute pathname of the leaf of this directory is greater than your system’s PATH_MAX limit. Can you call getcwd to fetch the directory’s pathname? How do the standard UNIX System tools deal with this long pathname? Can you archive the directory using either tar or cpio?"

I'm a beginner and all I have is an example in my textbook that just confuses me. Please help.

by user22933 at October 22, 2014 06:05 AM

StackOverflow

tail-recursive list all sub-directories for the given file location

I want to get all non-empty directories for the given file location; for example:

/src/abc
/src/abc/123/123.txt
/src/abc/abc/123.txt
/src/abc/foo/123.txt

I want to get this Seq[String]:

/src/abc/123
/src/abc/abc
/src/abc/foo

I use this code.

def getAllDirectories(location: String): Seq[String] = {
  def recursiveListDirectories(f: File): Seq[File] = {
    val these = f.listFiles
    val directories = these.filter(_.isDirectory)
    directories ++ directories.flatMap(recursiveListDirectories)
  }
  recursiveListDirectories(new File(location))
    .filter(t => !t.listFiles().forall(_.isDirectory))
    .map(_.getPath)
}

I wonder how I can make the recursiveListDirectories method tail-recursive?

Many thanks in advance
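One possible approach, as a sketch: replace the tree recursion with an explicit worklist, which is the usual trick since @tailrec only supports a single self-call in tail position. The directory-collecting part then looks like:

import java.io.File
import scala.annotation.tailrec

@tailrec
def listDirectoriesAcc(pending: List[File], acc: List[File]): List[File] =
  pending match {
    case Nil => acc
    case dir :: rest =>
      // listFiles returns null for unreadable paths, so guard it.
      val children = Option(dir.listFiles).getOrElse(Array.empty[File])
      val subDirs = children.filter(_.isDirectory).toList
      listDirectoriesAcc(subDirs ::: rest, acc ::: subDirs)
  }

// Drop-in replacement for recursiveListDirectories:
//   listDirectoriesAcc(List(new File(location)), Nil)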

by Cloud tech at October 22, 2014 05:52 AM

CompsciOverflow

Difference between parallel and concurrent buffering?

In double buffering there are two terms

Concurrent buffering.

Parallel buffering.

What is the difference between them? An answer with an example would be appreciated.

by MA Ali at October 22, 2014 05:41 AM

StackOverflow

backtrace a tree without mutable members

I am trying to make a recursive tree such that the parent has a reference to the child and the child has a reference to the parent. What I am trying to do is backtrace the tree from a child without mutable members. It is hard because giving the child a reference to the parent in the constructor requires the parent instance to already be created.

I could only think of two ways, and both are not so good. The first way is the following:

  1. create child instance with function "setParent()" which only works once with the help of a private boolean variable.
  2. create parent and pass the child instance.
  3. the parent will pass itself to "setParent()".

After that, child has reference to parent and setParent() cannot be used.

The second way is to create parent and child completely separately but hold them in some sort of data structure which can search for the parent of some child and the other way around.

If there is a better way, please teach me. I work mainly in Java, but the question is general.
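A third option, in languages with some form of lazy evaluation: "tie the knot" at construction time. A sketch in Scala (Node, root and child are hypothetical names; by-name constructor parameters defer evaluation until both objects exist):

class Node(val value: Int, parent0: => Option[Node], children0: => Seq[Node]) {
  // The by-name arguments are only evaluated on first access, after
  // both sides of the parent/child cycle have been constructed.
  lazy val parent: Option[Node] = parent0
  lazy val children: Seq[Node] = children0
}

object Tree {
  lazy val root: Node = new Node(1, None, Seq(child))
  lazy val child: Node = new Node(2, Some(root), Nil)
}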

by user98456 at October 22, 2014 05:37 AM

CompsciOverflow

Translating object oriented programs to their procedural equivalents

I'm new to the field of program transformation. I'm looking for resources on the techniques and methods one would use to translate object oriented code to its procedural equivalent. In particular, I need to translate arbitrary PHP programs written with Object Oriented constructs into purely procedural PHP programs.

I know there are many source transformation tools out there that can assist in this like txl, antlr, ROSE, etc. What I'm looking for is some direction into how to deconstruct the Object Oriented features into their procedural equivalents so that I can begin to develop the set of rules that these transformation programs require.

Maybe my google-fu is not up to snuff but I am having trouble finding links to papers or websites that detail converting OO code into procedural code.

Thanks for pointers to any resources.

by user45183 at October 22, 2014 05:34 AM

StackOverflow

Unresolved sbt dependencies

I am trying to add the blueprints-sail-graph (located here) dependency via sbt, and it is having trouble resolving one of the sail dependencies. I am new to Java/Scala development and would really appreciate your help! The following is my build.sbt file:

scalaVersion := "2.10.3"

libraryDependencies ++= Seq(
  "org.scalatest" % "scalatest_2.10" % "2.0" % "test" withSources() withJavadoc(),
  "org.scalacheck" %% "scalacheck" % "1.10.0" % "test" withSources() withJavadoc(),
  "com.tinkerpop.blueprints" % "blueprints-rexster-graph" % "2.6.0" withSources() withJavadoc(),
  "com.tinkerpop.blueprints" % "blueprints-sail-graph" % "2.5.0"
)

unmanagedBase := baseDirectory.value / "lib"

resolvers += "Sonatype OSS Snapshots" at "https://oss.sonatype.org/content/repositories/snapshots"

resolvers += "Scala-Tools Maven2 Snapshots Repository" at "http://scala-tools.org/repo-snapshots"

resolvers += "Local Maven Repository" at "file://"+Path.userHome.absolutePath+"/.m2/repository"

resolvers += "JBoss repository" at "https://repository.jboss.org/nexus/content/repositories/"

The error I get from sbt is:

[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  ::          UNRESOLVED DEPENDENCIES         ::
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  :: org.restlet.jse#org.restlet;2.1.1: not found
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[trace] Stack trace suppressed: run 'last *:update' for the full output.
[error] (*:update) sbt.ResolveException: unresolved dependency: org.restlet.jse#org.restlet;2.1.1: not found

The warnings above this error message are:

[info] Resolving org.restlet.jse#org.restlet;2.1.1 ...
[warn]  module not found: org.restlet.jse#org.restlet;2.1.1
[warn] ==== local: tried
[warn]   /home/d2b2/.ivy2/local/org.restlet.jse/org.restlet/2.1.1/ivys/ivy.xml
[warn] ==== public: tried
[warn]   http://repo1.maven.org/maven2/org/restlet/jse/org.restlet/2.1.1/org.restlet-2.1.1.pom
[warn] ==== Sonatype OSS Snapshots: tried
[warn]   https://oss.sonatype.org/content/repositories/snapshots/org/restlet/jse/org.restlet/2.1.1/org.restlet-2.1.1.pom
[warn] ==== Scala-Tools Maven2 Snapshots Repository: tried
[warn]   http://scala-tools.org/repo-snapshots/org/restlet/jse/org.restlet/2.1.1/org.restlet-2.1.1.pom
[warn] ==== Local Maven Repository: tried
[warn]   file:///home/d2b2/.m2/repository/org/restlet/jse/org.restlet/2.1.1/org.restlet-2.1.1.pom
[warn] ==== JBoss repository: tried
[warn]   https://repository.jboss.org/nexus/content/repositories/org/restlet/jse/org.restlet/2.1.1/org.restlet-2.1.1.pom

I know that the sail dependency is the issue because if I remove it, sbt compiles without a problem. I added the additional resolvers hoping that one of them would contain this jar -- in fact JBoss appears to, but for some reason it still did not work. I also tried many different versions of blueprints-sail-graph, unsuccessfully. I am not sure what else to do; please help me get this dependency resolved.

Thanks for all the help!

EDIT: According to another post, this jar needs to be specifically added to Ivy -- hope that saves someone some time. I tried a few things with Ivy but did not succeed :(
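For reference, org.restlet artifacts have historically been published to Restlet's own Maven repository rather than Maven Central, so adding it as a resolver may be all that is needed (an unverified sketch):

resolvers += "Restlet Repository" at "http://maven.restlet.org"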

by CodeKingPlusPlus at October 22, 2014 05:29 AM

TheoryOverflow

Difference between parallel and concurrent buffering?

In double buffering there are two terms

Concurrent buffering.

Parallel buffering.

What is the difference between them? An answer with an example would be appreciated.

Are they both still in use nowadays, in 2014?

by MA Ali at October 22, 2014 05:25 AM

CompsciOverflow

Booth's Algorithm Multiplication

When multiplying signed integers by Booth's algorithm, does the multiplicand always have to be negative? What happens if the multiplier and multiplicand are both negative? Does the algorithm still work? For example, how do you multiply 7x8, 7x(-8), and (-7)x(-8)? (Here, 7, 8 and -8 are represented as 8-bit signed integers.)

by rtbomb at October 22, 2014 05:15 AM

QuantOverflow

Expected Shortfall and Spectral Risk Measure

Not sure I am understanding spectral risk measures correctly.

Why is an equal weighting scheme placed on the tail losses in expected shortfall?

Will that not bias the expected value of the loss towards the lower tail, since the probability that such a loss will occur is small compared to losses closer to the p-value?

by Don at October 22, 2014 05:14 AM

CompsciOverflow

What is the purpose of $\epsilon$ transition?

$A = \{a^i b^j c^k\mid i = j\text{ or } j = k; i, j, k \ge 0\}$. In its pushdown automaton (shown in an image), shouldn't there be the red-colored transition instead of the black-colored one?

by qma at October 22, 2014 05:12 AM

UnixOverflow

OpenBSD and xorg: How to fully expand shrunken vesa video?

Because OpenBSD 5.5 does not appear to support moderately newer Nvidia cards (mine is a GT 610), I am using the vesa X.Org driver. The problem with vesa is that the displayed image is shrunken and doesn't expand completely to the monitor's full area of view.

Are there any tricks or command-line things I can try to get this shrunken vesa-mode video expanded fully?

by WillBProg127 at October 22, 2014 04:52 AM

CompsciOverflow

Find k maximum numbers from a heap of size n in O(klog(k)) time

I have a binary heap with $n$ elements. I want to get the $k$ largest elements in this heap, in $O(k \log k)$ time. How do I do it?

(Calling deletemax $k$ times yields a $O(k \log n)$ complexity. I'm looking for $O(k \log k)$.)

The only solution I've come up with so far is the following:

You have 2 arrays: A (largest numbers) and B (to analyze).

  • It's easy to find the largest number, since we already have the heap. We move the maximum number to $A$.
  • We move the maximum number's children to $B$
  • We sort $B$
  • We add the children of the largest number in $B$
  • Remove the largest number from B (first element of $B$), add it to $A$
  • Repeat the procedure until there are $k$ elements in $A$

The question here is: do we get $O(k \log k)$ complexity? We obviously repeat the procedure $k$ times, but does the sorting take $O(\log k)$ time? I guess if the array is already sorted, it's easy to insert a new number in $O(\log k)$ time. However, will the length of array B always be less than or equal to $k$?

Can you please confirm or deny my solution? If it's wrong, can you please help me find a solution to this problem?
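For comparison, a sketch of the textbook $O(k \log k)$ approach, which is essentially a cleaner version of the above: keep a secondary max-priority queue of candidate positions in the array-backed heap, pop the best, and push its two children. After $k$ pops the queue has seen at most $2k + 1$ entries, so every operation costs $O(\log k)$:

import scala.collection.mutable

def kLargest(heap: Array[Int], k: Int): List[Int] = {
  // Secondary priority queue over heap *indices*, ordered by node value.
  val pq = mutable.PriorityQueue.empty[Int](Ordering.by((i: Int) => heap(i)))
  if (heap.nonEmpty) pq.enqueue(0)        // start at the root
  var out = List.empty[Int]
  var taken = 0
  while (taken < k && pq.nonEmpty) {
    val i = pq.dequeue()                  // largest remaining candidate
    out ::= heap(i)
    taken += 1
    val (l, r) = (2 * i + 1, 2 * i + 2)   // children in the implicit tree
    if (l < heap.length) pq.enqueue(l)
    if (r < heap.length) pq.enqueue(r)
  }
  out.reverse                             // largest first
}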

by user1563544 at October 22, 2014 04:41 AM

/r/compsci

Applying multivar and matrix algebra to CS

How relevant are those two topics to computer science?

submitted by ryanac

October 22, 2014 04:33 AM

AP CS is killing me and I need help!

  1. Write a method called getLastName that takes a single String parameter containing someone's full name (formatted as first name followed by a space followed by the last name), and returns a String containing just the last name. For example, getLastName("Benedict Cumberbatch") should return "Cumberbatch".

  2. Write a method calculateCorrectChange that takes two parameters. The first parameter is a double called salesAmount, indicating the price of an item. The second parameter is an int called payment, indicating a whole number of dollars that the customer pays. The method returns a string as follows: "Your change is $0.25" if the payment is greater than the sales amount (where "$0.25" is replaced by the actual change the customer should receive), and otherwise "Insufficient fund." if the payment is less than the sales amount.

These two problems are killing me, I am using the program called Eclipse, pls help!!!
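As a hint for the first problem, a sketch in Scala rather than Java (the idea -- split on the space and keep the last piece -- carries over directly):

def getLastName(fullName: String): String =
  fullName.split(" ").last

// getLastName("Benedict Cumberbatch") == "Cumberbatch"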

submitted by XAznBeastX

October 22, 2014 04:15 AM


CompsciOverflow

prove 3x+1 problem is undecidable

Let $f(x) = 3x + 1$ if $x$ is odd, or $f(x) = x/2$ if $x$ is even. If you start with an integer $x$ and iterate $f$, you obtain a sequence $x, f(x), f(f(x)), \ldots$; stop if you ever hit 1. For example, if $x = 17$, you get the sequence 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1. Extensive computer tests have shown that every starting point between 1 and a large positive integer gives a sequence that ends in 1. But the question whether all positive integer starting points end up at 1 is unsolved; it is called the 3x + 1 problem. Suppose that $A_{TM}$ were decidable by a TM H. Use H to design a TM that solves the 3x + 1 problem.

by Alex at October 22, 2014 03:40 AM


TheoryOverflow

Nondeterministic pushdown automaton

I have a solution for the pushdown automaton that accepts this language: [image]

which looks like: [image]

I am trying to work from this in order to produce a pushdown automaton that accepts this language: [image]

[image]

How can I modify the first solution to work for this new language?

by donth77 at October 22, 2014 03:16 AM

CompsciOverflow

Grammar LL parser

I have the grammar below and am trying to figure out whether it can be parsed using an LL parser. If not, please explain.

S --> ab | cB
A --> b | Bb
B --> aAb | cC
C --> cA | Aba

From what I understand the intersection of the two sets must be empty to pass the pairwise disjointness test.

But I am not sure where to begin; I have been looking through my textbook and http://en.wikipedia.org/wiki/LL_parser#Parsing_procedure but can't quite understand or find any examples to follow along. I was watching this video, https://www.youtube.com/watch?v=N9UuAPU6DAg, which says to compute FIRST sets for all the non-terminals and check whether the FIRST sets for the alternatives of a given non-terminal are all disjoint. If they all are, the grammar is LL; if there are any non-terminals for which they are not, it is not. If there are any ε rules, you'll need FOLLOW sets as well. But how can I compute the FIRST sets to do this problem?

by Jessica Dinh at October 22, 2014 03:09 AM

StackOverflow

How to change a private function in Clojure lib

Say I'm using a Clojure library that has a private function that doesn't work the way I need it to. For example, maybe it returns maps with strings as keys and I want it to return keywords as keys (imagine for now that this would be much more efficient than writing a function to convert between the two). I can't change the definition of the function with alter-var-root because it's private, but is there anything else I could do to change it?

edit: I was wrong - you can change a private function with alter-var-root, and I have been able to change the implementation of the offending function as I wanted. Yay for mutable namespaces!

by Hendekagon at October 22, 2014 03:07 AM

How to set Akka actors run only for specific time period?

I have a big task, which I break down into smaller tasks and analyse. I have a basic model:

Master, worker and listener.

The master creates the tasks and gives them to worker actors. Once a worker actor completes, it asks the master for another task. Once all tasks are completed, the workers inform the listener. They usually take less than 2 minutes to complete 1000 tasks.

Now, sometimes the time taken by some tasks might be more than others. I want to set a timer for each task, and if a task takes too long, the worker's task should be aborted by the master and the task resubmitted later as a new one. How do I implement this? I can calculate the time taken by a worker task, but how does the master actor keep tabs on the time taken by all worker actors in real time?
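One common pattern, sketched below (Task, TaskTimedOut and the two-minute budget are hypothetical names, not part of the question): rather than polling the workers, the master schedules itself a timeout message per task and treats any task still outstanding when the message fires as aborted:

import scala.concurrent.duration._
import akka.actor._

case class Task(id: Long)          // hypothetical task message
case class TaskTimedOut(id: Long)  // hypothetical timeout message

class Master extends Actor {
  import context.dispatcher        // ExecutionContext for the scheduler
  var outstanding = Set.empty[Long]

  def dispatch(task: Task, worker: ActorRef): Unit = {
    outstanding += task.id
    worker ! task
    // Delivered back to the master after 2 minutes, finished or not.
    context.system.scheduler.scheduleOnce(2.minutes, self, TaskTimedOut(task.id))
  }

  def receive = {
    case TaskTimedOut(id) if outstanding(id) =>
      outstanding -= id
      // abort the worker's task and resubmit it as a new one here
  }
}

Workers would report completion back to the master, which removes the id from outstanding so a late TaskTimedOut is simply ignored.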

by Balaram26 at October 22, 2014 03:04 AM



StackOverflow

Apache Spark - Generate List Of Pairs

Given a large file containing data of the form, (V1,V2,...,VN)

2,5
2,8,9
2,5,8
...

I am trying to achieve a list of pairs similar to the following using Spark

((2,5),2)
((2,8),2)
((2,9),1)
((8,9),1)
((5,8),1)

I tried the suggestions mentioned in response to an older question, but I have encountered some issues. For example,

val dataRead = sc.textFile(inputFile)
val itemCounts = dataRead
  .flatMap(line => line.split(","))
  .map(item => (item, 1))
  .reduceByKey((a, b) => a + b)
  .cache()
val nums = itemCounts.keys
  .filter({case (a) => a.length > 0})
  .map(x => x.trim.toInt)
val pairs = nums.flatMap(x => nums2.map(y => (x,y)))

I got the error,

scala> val pairs = nums.flatMap(x => nums.map(y => (x,y)))
<console>:27: error: type mismatch;
 found   : org.apache.spark.rdd.RDD[(Int, Int)]
 required: TraversableOnce[?]
       val pairs = nums.flatMap(x => nums.map(y => (x,y)))
                                             ^

Could someone please point me towards what I might be doing incorrectly, or what might be a better way to achieve the same? Many thanks in advance.
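One thing stands out: nums is an RDD, and an RDD cannot be used inside another RDD's transformation, which is what nums.flatMap(x => nums.map(...)) attempts -- hence the type mismatch. Since the desired pairs come from within each input line, a sketch that avoids nesting altogether (assuming sc and inputFile as above; on Spark 1.x you may also need import org.apache.spark.SparkContext._ for reduceByKey):

// Count co-occurring pairs per line.
val pairCounts = sc.textFile(inputFile)
  .map(_.split(",").map(_.trim.toInt).sorted)   // one sorted Array[Int] per line
  .flatMap(_.combinations(2).map { case Array(a, b) => ((a, b), 1) })
  .reduceByKey(_ + _)                           // e.g. ((2,5),2), ((2,8),2), ...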

by user799188 at October 22, 2014 02:15 AM

Wes Felter

Gurucharan Shetty: Integrate Docker containers with Open vSwitch

Gurucharan Shetty: Integrate Docker containers with Open vSwitch:

"Did you hear about the startup that’s integrating OVS and Docker?"
“I think I’ll replace them with a small shell script.”

October 22, 2014 02:15 AM

StackOverflow

apache spark yarn cluster

I am trying to run a Spark standalone application in yarn-client mode. I am getting a ClassNotFoundException (below). I am wondering how to include or add the Hadoop/YARN files to the standalone application?

Caused by: org.apache.spark.SparkException: Unable to load YARN support
    at org.apache.spark.deploy.SparkHadoopUtil$.liftedTree1$1(SparkHadoopUtil.scala:106)
    at org.apache.spark.deploy.SparkHadoopUtil$.<init>(SparkHadoopUtil.scala:101)
    at org.apache.spark.deploy.SparkHadoopUtil$.<clinit>(SparkHadoopUtil.scala)
    ... 22 more
Caused by: java.lang.ClassNotFoundException: org.apache.spark.deploy.yarn.YarnSparkHadoopUtil
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)

If I include spark-assembly-1.1.0-hadoop2.4.0.jar in the classpath, I get the exception below.

Caused by: org.apache.spark.SparkException: YARN mode not available ?
    at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:1534)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:307)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:97)
    at com.compellon.engine.web.RestService$class.initializeSpark(RestService.scala:87)
    at com.compellon.engine.web.RestService$class.$init$(RestService.scala:92)
    at com.compellon.engine.web.RestServiceActor.<init>(RestService.scala:33)
    ... 17 more
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.spark.SparkContext$.org$apache$spark$SparkContext$$createTaskScheduler(SparkContext.scala:1531)
    ... 22 more
Caused by: java.lang.NoSuchMethodError: org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.minRegisteredRatio_$eq(D)V

Any pointers to, or a hello-world example of, a standalone application creating a SparkContext pointing to a yarn-client cluster would be great.

Thank you

by firemonkey at October 22, 2014 02:01 AM

/r/compsci

How can I make extensible heuristics?

Here is my problem: I have a program that takes user input and then parses the input. I need some way to decide what the parsed input applies to. I know that it will apply to one (and only one) of the services that I have in the program. In order to make sure that the parsed input gets directed to the correct service, I would like to use a heuristic and use it to pass the parsed input in the right direction.

What I know doesn't work is to look at the parsed input for words that I like and then add to (or subtract from) a heuristic value for every service. This is an issue because if someone else decides to score things on a scale of 0-1 while I'm scoring on a scale of 0-999, then even if the other person's service is a better match, mine is quite likely to be chosen.

How can I effectively code a set of heuristics when I don't know how many services the completed project will have, or even who will write them? Is there any way to make sure that all of the parts play nicely with each other?

submitted by Bonooru

October 22, 2014 01:57 AM


DragonFly BSD Digest

For the next DragonFly release

I noted the last few things that should be committed before the DragonFly release.  People have spoken up for most of them, but it wouldn’t hurt to try recent -master with the upmap/kpmap work that recently went in.  Benchmarks wouldn’t be a bad idea, either.

by Justin Sherrill at October 22, 2014 01:46 AM

CompsciOverflow

Average number of comparisons to locate item in BST

This is a GRE practice question.

[image: binary search tree with n = 8 nodes]

If a node in the binary search tree above is to be located by binary tree search, what is the expected number of comparisons required to locate one of the items (nodes) in the tree chosen at random?

(A) 1.75

(B) 2

(C) 2.75

(D) 3

(E) 3.25

My answer was 3 because $n=8$ and $\lg(n)$ comparisons should be made, and $\lg(8) = 3$. But the correct answer is 2.75. Can someone explain the correct answer? Thanks!
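The expected number of comparisons is the average depth of the nodes (counting the root as one comparison), not $\lg n$. Purely as an illustration -- the true distribution depends on the tree in the figure -- a shape with one node at depth 1, two at depth 2, three at depth 3 and two at depth 4 gives

$\frac{1 \cdot 1 + 2 \cdot 2 + 3 \cdot 3 + 2 \cdot 4}{8} = \frac{1 + 4 + 9 + 8}{8} = \frac{22}{8} = 2.75$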

by Tootsie Rolls at October 22, 2014 01:42 AM

arXiv Cryptography and Security

Testing Security Policies for Distributed Systems: Vehicular Networks as a Case Study. (arXiv:1410.5789v1 [cs.CR])

Due to the increasing complexity of distributed systems, security testing is becoming increasingly critical in ensuring the reliability of such systems in relation to their security requirements. To address this issue, we rely in this paper on model-based active testing. We propose a framework to specify security policies and test their implementation. Our framework makes it possible to automatically generate test sequences, in order to validate the conformance of a security policy. This framework contains several new methods to ease the test case generation. To demonstrate the reliability of our framework, we present a Vehicular Networks System as an ongoing case study.

by Mohamed H. E. Aouadi, Khalifa Toumi, Ana Cavalli at October 22, 2014 01:30 AM

Optimal Feature Selection from VMware ESXi 5.1 Feature Set. (arXiv:1410.5784v1 [cs.DC])

A study of a VMware ESXi 5.1 server has been carried out to find the optimal set of parameters which suggest the usage of different resources of the server. Feature selection algorithms have been used to extract the optimum set of parameters from the data obtained from the VMware ESXi 5.1 server using the esxtop command. Multiple virtual machines (VMs) are running on the mentioned server. The K-means algorithm is used for clustering the VMs. The goodness of each cluster is determined by the Davies-Bouldin index and the Dunn index, respectively. The best cluster is further identified by the determined indices. The features of the best cluster are taken as the set of optimal parameters.

by Amartya Hatua at October 22, 2014 01:30 AM

Lightweight Verification of Markov Decision Processes with Rewards. (arXiv:1410.5782v1 [cs.LO])

Markov decision processes are useful models of concurrency optimisation problems, but are often intractable for exhaustive verification methods. Recent work has introduced lightweight approximative techniques that sample directly from scheduler space, bringing the prospect of scalable alternatives to standard numerical algorithms. The focus so far has been on optimising the probability of a property, but many problems require quantitative analysis of rewards. In this work we therefore present lightweight verification algorithms to optimise the rewards of Markov decision processes. We provide the statistical confidence bounds that this necessitates and demonstrate our approach on standard case studies.

by Axel Legay, Sean Sedwards, Louis-Marie Traonouez at October 22, 2014 01:30 AM

Proving Safety with Trace Automata and Bounded Model Checking. (arXiv:1410.5764v1 [cs.FL])

Loop under-approximation is a technique that enriches C programs with additional branches that represent the effect of a (limited) range of loop iterations. While this technique can speed up the detection of bugs significantly, it introduces redundant execution traces which may complicate the verification of the program. This holds particularly true for verification tools based on Bounded Model Checking, which incorporate simplistic heuristics to determine whether all feasible iterations of a loop have been considered.

We present a technique that uses \emph{trace automata} to eliminate redundant executions after performing loop acceleration. The method reduces the diameter of the program under analysis, which is in certain cases sufficient to allow a safety proof using Bounded Model Checking. Our transformation is precise---it does not introduce false positives, nor does it mask any errors. We have implemented the analysis as a source-to-source transformation, and present experimental results showing the applicability of the technique.

by Daniel Kroening, Matt Lewis, Georg Weissenbacher at October 22, 2014 01:30 AM

A Computer Virus Propagation Model Using Delay Differential Equations With Probabilistic Contagion And Immunity. (arXiv:1410.5718v1 [cs.SI])

The SIR model is used extensively in the field of epidemiology, in particular, for the analysis of communal diseases. One problem with SIR and other existing models is that they are tailored to random or Erdos type networks since they do not consider the varying probabilities of infection or immunity per node. In this paper, we present the application and the simulation results of the pSEIRS model that takes into account the probabilities, and is thus suitable for more realistic scale free networks. In the pSEIRS model, the death rate and the excess death rate are constant for infective nodes. Latent and immune periods are assumed to be constant and the infection rate is assumed to be proportional to I (t) N(t), where N (t) is the size of the total population and I(t) is the size of the infected population. A node recovers from an infection temporarily with a probability p and dies from the infection with probability (1-p).

by M. S. S. Khan at October 22, 2014 01:30 AM

Robust Multidimensional Mean-Payoff Games are Undecidable. (arXiv:1410.5703v1 [cs.LO])

Mean-payoff games play a central role in quantitative synthesis and verification. In a single-dimensional game a weight is assigned to every transition and the objective of the protagonist is to assure a non-negative limit-average weight. In the multidimensional setting, a weight vector is assigned to every transition and the objective of the protagonist is to satisfy a boolean condition over the limit-average weight of each dimension, e.g., $\mathrm{LimAvg}(x_1) \leq 0 \vee \mathrm{LimAvg}(x_2) \geq 0 \wedge \mathrm{LimAvg}(x_3) \geq 0$. We recently proved that when one of the players is restricted to finite-memory strategies then the decidability of determining the winner is inter-reducible with Hilbert's Tenth problem over rationals (a fundamental long-standing open problem). In this work we allow arbitrary (infinite-memory) strategies for both players and we show that the problem is undecidable.

by Yaron Velner at October 22, 2014 01:30 AM

Dynamic Optimization For Heterogeneous Powered Wireless Multimedia Sensor Networks With Correlated Sources and Network Coding. (arXiv:1410.5697v1 [cs.NI])

The energy consumption in wireless multimedia sensor networks (WMSN) is much greater than that in traditional wireless sensor networks. Thus, it is a huge challenge to maintain perpetual operation for WMSN. In this paper, we propose a new heterogeneous energy supply model for WMSN through the coexistence of renewable energy and the electricity grid. We address cross-layer optimization for multiple multicast with distributed source coding and intra-session network coding in heterogeneous powered wireless multimedia sensor networks (HPWMSN) with correlated sources. The aim is to achieve the optimal reconstruction distortion at sinks and the minimal cost of purchasing electricity from the electricity grid. Based on the Lyapunov drift-plus-penalty with perturbation technique and the dual decomposition technique, we propose a fully distributed dynamic cross-layer algorithm, including multicast routing, source rate control, network coding, session scheduling and energy management, requiring only knowledge of the instantaneous system state. The explicit trade-off between the optimization objective and queue backlog is theoretically proven. Finally, the simulation results verify the theoretic claims.

by Weiqiang Xu, Yushu Zhang, Qingjiang Shi, Xiaodong Wang at October 22, 2014 01:30 AM

DAPriv: Decentralized architecture for preserving the privacy of medical data. (arXiv:1410.5696v1 [cs.CR])

The digitization of medical data has been a sensitive topic. In modern times, laws such as HIPAA provide some guidelines for electronic transactions in medical data to prevent attacks and fraudulent usage of private information. In our paper, we explore an architecture that uses hybrid computing with decentralized key management and show how it is suitable for preventing a special form of re-identification attack that we name the re-assembly attack. This architecture would be able to use current infrastructure, from mobile phones to server certificates and cloud-based decentralized storage models, in an efficient way to provide a reliable model for communication of medical data. We encompass entities including patients, doctors, insurance agents, emergency contacts, researchers, medical test laboratories and technicians. This is a complete architecture that provides patients with a good level of privacy, secure communication and more direct control.

by Rajesh Sharma, Deepak Subramanian, Satish N. Srirama at October 22, 2014 01:30 AM

Divide and Conquer: Partitioning OSPF networks with SDN. (arXiv:1410.5626v1 [cs.NI])

Software Defined Networking (SDN) is an emerging network control paradigm focused on logical centralization and programmability. At the same time, distributed routing protocols, most notably OSPF and IS-IS, are still prevalent in IP networks, as they provide shortest path routing, fast topological convergence after network failures, and, perhaps most importantly, the confidence based on decades of reliable operation. Therefore, a hybrid SDN/OSPF operation remains a desirable proposition. In this paper, we propose a new method of hybrid SDN/OSPF operation. Our method is different from other hybrid approaches, as it uses SDN nodes to partition an OSPF domain into sub-domains thereby achieving the traffic engineering capabilities comparable to full SDN operation. We place SDN-enabled routers as sub-domain border nodes, while the operation of the OSPF protocol continues unaffected. In this way, the SDN controller can tune routing protocol updates for traffic engineering purposes before they are flooded into sub-domains. While local routing inside sub-domains remains stable at all times, inter-sub-domain routes can be optimized by determining the routes in each traversed sub-domain. As the majority of traffic in non-trivial topologies has to traverse multiple sub-domains, our simulation results confirm that a few SDN nodes allow traffic engineering up to a degree that renders full SDN deployment unnecessary.

by Marcel Caria, Tamal Das, Admela Jukan, Marco Hoffmann at October 22, 2014 01:30 AM

Z2-double cyclic codes. (arXiv:1410.5604v1 [cs.IT])

A binary linear code $C$ is a $\mathbb{Z}_2$-double cyclic code if the set of coordinates can be partitioned into two subsets such that any cyclic shift of the coordinates of both subsets leaves the code invariant. These codes can be identified as submodules of the $\mathbb{Z}_2[x]$-module $\mathbb{Z}_2[x]/(x^r-1)\times\mathbb{Z}_2[x]/(x^s-1).$ We determine the structure of $\mathbb{Z}_2$-double cyclic codes by giving the generator polynomials of these codes. The related polynomial representation of $\mathbb{Z}_2$-double cyclic codes and their duals, and the relations between the polynomial generators of these codes, are studied.

by Joaquim Borges, Cristina Fernández-Córdoba, Roger Ten-Valls at October 22, 2014 01:30 AM

Pushing the envelope of Optimization Modulo Theories with Linear-Arithmetic Cost Functions. (arXiv:1410.5568v1 [cs.LO])

In the last decade we have witnessed impressive progress in the expressiveness and efficiency of Satisfiability Modulo Theories (SMT) solving techniques. This has brought previously-intractable problems within the reach of state-of-the-art SMT solvers, in particular in the domain of SW and HW verification. Many SMT-encodable problems of interest, however, also require the capability of finding models that are optimal wrt. some cost functions. In previous work, namely "Optimization Modulo Theory with Linear Rational Cost Functions -- OMT(LRA U T)", we leveraged SMT solving to handle the minimization of cost functions on linear arithmetic over the rationals, by means of a combination of SMT and LP minimization techniques. In this paper we push the envelope of our OMT approach along three directions: first, we extend it to work also with linear arithmetic on the mixed integer/rational domain, by means of a combination of SMT, LP and ILP minimization techniques; second, we develop a multi-objective version of OMT, so as to handle many cost functions simultaneously; third, we develop an incremental version of OMT, so as to exploit the incrementality of some OMT-encodable problems. An empirical evaluation performed on OMT-encoded verification problems demonstrates the usefulness and efficiency of these extensions.

by Roberto Sebastiani, Patrick Trentin at October 22, 2014 01:30 AM

Cryptographic Enforcement of Information Flow Policies without Public Information. (arXiv:1410.5567v1 [cs.CR])

Cryptographic access control has been studied for over 30 years and is now a mature research topic. When symmetric cryptographic primitives are used, each protected resource is encrypted and only authorized users should have access to the encryption key. By treating the keys themselves as protected resources, it is possible to develop schemes in which authorized keys are derived from the keys explicitly assigned to the user and publicly available information. It has been generally assumed that each user would be assigned a single key from which all other authorized keys would be derived. Recent work has challenged this assumption by developing schemes that do not require public information, the trade-off being that a user may require more than one key. However, these new schemes, which require a chain partition of the partially ordered set on which the access control policy is based, have some disadvantages. In this paper we define the notion of a tree-based cryptographic enforcement scheme, which, like chain-based schemes, requires no public information. We establish that the strong security properties of chain-based schemes are preserved by tree-based schemes, and provide an efficient construction for deriving a tree-based enforcement scheme from a given policy that minimizes the number of keys required.

by Jason Crampton, Naomi Farley, Gregory Gutin, Mark Jones, Bertram Poettering at October 22, 2014 01:30 AM

Certified Connection Tableaux Proofs for HOL Light and TPTP. (arXiv:1410.5476v1 [cs.LO])

In recent years, the Metis prover based on ordered paramodulation and model elimination has replaced the earlier built-in methods for general-purpose proof automation in HOL4 and Isabelle/HOL. In the annual CASC competition, the leanCoP system based on connection tableaux has however performed better than Metis. In this paper we show how leanCoP's core algorithm can be implemented inside HOL Light. leanCoP's flagship feature, namely its minimalistic core, results in a very simple proof system. This plays a crucial role in extending the MESON proof reconstruction mechanism to connection tableaux proofs, providing an implementation of leanCoP that certifies its proofs. We discuss the differences between our direct implementation using an explicit Prolog stack and the continuation-passing implementation of MESON present in HOL Light, and compare their performance on all core HOL Light goals. The resulting prover can also be used as a general-purpose TPTP prover. We compare its performance against the resolution-based Metis on TPTP and other interesting datasets.

by Cezary Kaliszyk, Josef Urban, Jiri Vyskocil at October 22, 2014 01:30 AM

Machine Learning of Coq Proof Guidance: First Experiments. (arXiv:1410.5467v1 [cs.LO])

We report the results of the first experiments with learning proof dependencies from the formalizations done with the Coq system. We explain the process of obtaining the dependencies from the Coq proofs, the characterization of formulas that is used for the learning, and the evaluation method. Various machine learning methods are compared on a dataset of 5021 toplevel Coq proofs coming from the CoRN repository. The best resulting method covers on average 75% of the needed proof dependencies among the first 100 predictions, which is comparable to the performance of such initial experiments on other large-theory corpora.

by Cezary Kaliszyk, Lionel Mamane, Josef Urban at October 22, 2014 01:30 AM

Planet Theory

Optimal randomized incremental construction for guaranteed logarithmic planar point location

Authors: Michael Hemmer, Michal Kleinbort, Dan Halperin
Download: PDF
Abstract: Given a planar map of $n$ segments in which we wish to efficiently locate points, we present the first randomized incremental construction of the well-known trapezoidal-map search-structure that only requires expected $O(n \log n)$ preprocessing time while deterministically guaranteeing worst-case linear storage space and worst-case logarithmic query time. This settles a long-standing open problem; the best previously known construction time of such a structure, which is based on a directed acyclic graph, the so-called history DAG, and with the above worst-case space and query-time guarantees, was expected $O(n \log^2 n)$. The result is based on a deeper understanding of the structure of the history DAG, its depth in relation to the length of its longest search path, as well as its correspondence to the trapezoidal search tree. Our results immediately extend to planar maps induced by finite collections of pairwise interior disjoint well-behaved curves.

October 22, 2014 12:41 AM

A simpler and better LSH for Maximum Inner Product Search (MIPS)

Authors: Behnam Neyshabur, Nathan Srebro
Download: PDF
Abstract: In a recent manuscript ("Asymmetric LSH (ALSH) for Sublinear Time Maximum Inner Product Search (MIPS)", available on arXiv and to be presented in the upcoming NIPS), Shrivastava and Li argue that there is no symmetric LSH for the problem of Maximum Inner Product Search and propose an asymmetric LSH based on different mappings for query and database points. We show a simple LSH for the problem, using a simple symmetric mapping, with better performance, both theoretically and empirically.

October 22, 2014 12:41 AM

Rearrangement Problems with Duplicated Genomic Markers

Authors: Antoine Thomas
Download: PDF
Abstract: Understanding the dynamics of genome rearrangements is a major issue in phylogenetics. Phylogenetics is the study of species evolution. A major goal of the field is to establish evolutionary relationships within groups of species, in order to infer the topology of an evolutionary tree formed by this group and common ancestors of some of these species. In this context, having means to evaluate relative evolutionary distances between species, or to infer common ancestor genomes for a group of species, would be of great help. This work, in the vein of other studies from the past, aims at designing such means, here in the particular case where genomes present multiple occurrences of genes, which makes things more complex. Several hypotheses accounting for the presence of duplications were considered. Distance formulae as well as scenario-computing algorithms were established, along with their complexity proofs.

October 22, 2014 12:41 AM

Building a Balanced k-d Tree in Logarithmic Time

Authors: Russell A. Brown
Download: PDF
Abstract: The original description of the k-d tree recognized that rebalancing techniques, such as are used to build an AVL tree, are not applicable to a k-d tree. Hence, in order to build a balanced k-d tree, it is necessary to find the median of the data for each recursive subdivision of those data. The sort or selection that is used to find the median for each subdivision strongly influences the computational complexity of building a k-d tree. This paper discusses an alternate approach that builds a balanced k-d tree by first sorting the data in each of k dimensions prior to building the tree, then constructs the tree in a manner that preserves the order of the sorts and thereby avoids the requirement for any further sorting.

October 22, 2014 12:41 AM

Stochastic billiards for sampling from the boundary of a convex set

Authors: A. B. Dieker, Santosh Vempala
Download: PDF
Abstract: Stochastic billiards can be used for approximate sampling from the boundary of a bounded convex set through the Markov Chain Monte Carlo (MCMC) paradigm. This paper studies how many steps of the underlying Markov chain are required to get samples (approximately) from the uniform distribution on the boundary of the set, for sets with an upper bound on the curvature of the boundary. Our main theorem implies a polynomial-time algorithm for sampling from the boundary of such sets.

October 22, 2014 12:41 AM

Simple PTAS's for families of graphs excluding a minor

Authors: Sergio Cabello, David Gajser
Download: PDF
Abstract: We show that very simple algorithms based on local search are polynomial-time approximation schemes for Maximum Independent Set, Minimum Vertex Cover and Minimum Dominating Set, when the input graphs have a fixed forbidden minor.

October 22, 2014 12:41 AM

Polynomials: a new tool for length reduction in binary discrete convolutions

Authors: Amihood Amir, Oren Kapah, Ely Porat, Amir Rothschild
Download: PDF
Abstract: Efficient handling of sparse data is a key challenge in Computer Science. Binary convolutions, such as polynomial multiplication or the Walsh Transform, are a useful tool in many applications and are efficiently solved.

In the last decade, several problems required efficient solution of sparse binary convolutions. Both randomized and deterministic algorithms were developed for efficiently computing the sparse polynomial multiplication. The key operation in all these algorithms was length reduction. The sparse data is mapped into small vectors that preserve the convolution result. The reduction method used to date was the modulo function since it preserves location (of the "1" bits) up to cyclic shift.

To date there is no known efficient algorithm for computing the sparse Walsh transform. Since the modulo function does not preserve the Walsh transform, a new method for length reduction is needed. In this paper we present such a new method: polynomials. This method enables the development of an efficient algorithm for computing the binary sparse Walsh transform. To our knowledge, this is the first such algorithm. We also show that this method allows a faster deterministic computation of sparse polynomial multiplication than currently known in the literature.

October 22, 2014 12:40 AM

On computational complexity of length embeddability of graphs

Authors: Mikhail Tikhomirov
Download: PDF
Abstract: A graph $G$ is embeddable in $\mathbb{R}^d$ if vertices of $G$ can be assigned with points of $\mathbb{R}^d$ in such a way that all pairs of adjacent vertices are at the distance 1. We show that verifying embeddability of a given graph in $\mathbb{R}^d$ is NP-hard in the case $d > 2$ for all reasonable notions of embeddability.

October 22, 2014 12:40 AM

Generalized Compression Dictionary Distance as Universal Similarity Measure

Authors: Andrey Bogomolov, Bruno Lepri, Fabio Pianesi
Download: PDF
Abstract: We present a new similarity measure based on information-theoretic measures which is superior to Normalized Compression Distance for clustering problems and inherits the useful properties of conditional Kolmogorov complexity. We show that Normalized Compression Dictionary Size and Normalized Compression Dictionary Entropy are computationally more efficient, as the need to perform the compression itself is eliminated. They also scale linearly with exponential vector size growth and are content independent. We show that normalized compression dictionary distance is compressor independent, if limited to lossless compressors, which leaves room for optimizations and implementation speed improvement for real-time and big data applications. The introduced measure is applicable for machine learning tasks of parameter-free unsupervised clustering, supervised learning such as classification and regression, and feature selection, and is applicable to big data problems with an order-of-magnitude speed increase.

October 22, 2014 12:40 AM

QuantOverflow

Negative Eonia rates

I'm curious how the current negative Eonia (Euro OverNight Index Average) rates would impact derivatives pricing. Does it mean that if I post cash collateral to you, I also need to pay you interest?

More generally, does it mean that the classical interest-rate modelling assumption that interest rates can't go negative is now invalid?

by Student T at October 22, 2014 12:11 AM

Planet Clojure

Flaky crusts: Good for pies; bad for test suites

Unit tests that fail intermittently, without exposing any production bugs, are the worst. Like the boy who cried wolf, they reduce our confidence that they're telling us the truth. Without that confidence, test suite failures become everyday occurrences, and when a real bug rears its ugly head, we may miss it.

In this post, we're going to look at, and learn to avoid, one of the most annoying and time-wasting causes of flaky tests: test pollution. With some varieties of flaky tests, we can look at the failing build, look at the test, and spot the error right away. Unfortunately, that's not often the case with pollution. We're talking here about operations performed in one test that affect the outcome of other tests. In order to avoid pollution, every test should be self-contained and unaffected by whatever happens in other tests.

Of course, most testing libraries make every effort to protect us from these kinds of mistakes. JUnit gives us a fresh instance of the test class for every test, preventing us from changing fields in one test and having those changes propagate to the next one. RSpec's mock object library resets all the mocks and stubs before running each test.

But we application programmers still can and do shoot ourselves in the foot, by finding some shared mutable state to muck up.

Class variables (aka global variables)

In object-oriented languages, this shared mutable state is often held in class variables:

class System
  class << self
    attr_accessor :current_user
  end
end

class Authentication
  def login_user(user)
    if user.valid_credentials?
      System.current_user = user
    else
      System.current_user = nil
    end
  end
end

And as we all know, class variables are globals, and global variables have some issues. What harm can they cause here? Take the following tests, for example, each reasonable on its own:

# spec/system_spec.rb
require 'rspec'
require 'system'

describe System do
  describe '.current_user' do
    it 'is nil by default' do
      expect(System.current_user).to eq(nil)
    end
  end
end

# spec/authentication_spec.rb
require 'rspec'
require 'authentication'
require 'system'
require 'user'

describe Authentication do
  before do
    @auth = Authentication.new
    @valid_user = User.new(:login => "guest",
                           :password => "guest")
  end

  describe '#login_user' do
    it 'logs a user in' do
      @auth.login_user(@valid_user)
      expect(System.current_user).to eq(@valid_user)
    end
  end
end

Run these two tests individually, as many times as you want, and they'll always pass. Run them in your test suite, and they'll always fail! Run them in a random order, and you'll fail or pass sometimes, depending on the ordering between them. These tests read and clobber, respectively, the shared mutable state in System.current_user. So unless we take explicit action when tearing down test #1, or when setting up test #2, we're dependent on test ordering to determine whether our test suite succeeds or fails.

Now that we've found the bug, the fix is clear: reset any shared mutable state. We could do this either as part of the test setup or teardown—I don't particularly care which, as long as it's consistent.
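
In Clojure's clojure.test, for example, a minimal sketch of that reset might look like the following (the current-user atom here is a hypothetical stand-in for the System.current_user above):

(ns system-test
  (:require [clojure.test :refer [deftest is use-fixtures]]))

;; Hypothetical shared mutable state, analogous to System.current_user.
(def current-user (atom nil))

;; Reset the shared state before every test, so no test depends on
;; whatever a previously-run test left behind.
(use-fixtures :each
  (fn [run-test]
    (reset! current-user nil)
    (run-test)))

(deftest current-user-is-nil-by-default
  (is (nil? @current-user)))

(deftest login-sets-current-user
  (reset! current-user {:login "guest"})
  (is (= "guest" (:login @current-user))))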

Incidentally, things like thread-local variables can have exactly the same issues in environments where threads come from a thread pool (which is to say, most environments). A newly-checked-out thread, without some tweaking, will still have the state from its previous use. So don't be fooled into thinking you're safer using thread-local variables than class variables.

External data

External datastores or services are another obvious locus of test pollution. There are a few fine strategies out there for tearing down database state after tests: running the whole test in a transaction, truncating the tables after the test run, etc. If your continuous integration build allows multiple concurrent test runs to happen against the exact same database, you're in a similar situation. This kind of test pollution is across test suites, which is even nastier because any given test suite may always pass when run alone, but then fail in CI. If this sort of setup is necessary, only the transaction solution will work, and even then only if the database isolation levels allow one suite's activity to be invisible to the other. Of course, if those isolation levels don't match what's in production, there may be conflicting goals here.
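
As a rough sketch of the transaction strategy in Clojure (assuming clojure.java.jdbc; the db-spec and the dynamic *test-db* var are hypothetical stand-ins for however the code under test reaches the database):

(ns db-test
  (:require [clojure.java.jdbc :as jdbc]
            [clojure.test :refer [use-fixtures]]))

(def db-spec {:subprotocol "postgresql"        ; hypothetical test database
              :subname "//localhost/app_test"})

(def ^:dynamic *test-db* nil)

;; Run each test inside a transaction that is marked rollback-only,
;; so nothing the test writes survives into the next test.
(use-fixtures :each
  (fn [run-test]
    (jdbc/with-db-transaction [tx db-spec]
      (jdbc/db-set-rollback-only! tx)
      (binding [*test-db* tx]
        (run-test)))))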

In some test suites, the main database may be safe to change during tests due to some automatic teardown, but other datastores may be problematic. Perhaps there are interactions with a cache, a message queue, another local internal application, a third-party API, or even the filesystem. If we're following Michael Feathers's definition of a unit test, we won't call these tests "unit tests." Nevertheless, some tests are bound to interact with these kinds of external stores, and those tests are at risk of pollution if we're not careful with state.

State hangs out in weird places

If you're using Clojure, all this probably sounds obvious. We've heard over and over that shared mutable state is a problem, and we know that we need to be using it only in disciplined ways. We avoid a lot of these problems by default in Clojure, but there are some non-obvious places where it can rear its ugly head. Check this out:

(defn find-organization [id]
  (or (db/find *db-connection* "organizations" id)
      (make-null-organization)))
(def get-organization (memoize find-organization))

This memoized function was written with the (probably naive) assumption that the organizations in the database are static, never changing while the production app is running. This lets us skip database lookups when we'd already made them. More concretely, get-organization has a memoization cache that is never cleared. Our assumption may actually (in rare cases) have been valid for our production code, but a good set of tests is likely to want to cover multiple possible states of the database.

So we aren't using the typical state atom or ref that you'd usually look for in Clojure when you're hunting test pollution. But nevertheless, a test could clear out or with-redefs away all the database state all it wanted, and it could still get results from a previous test run if the test result is dependent on the return value from get-organization.
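
One way out, sketched under the assumption that the caching itself is worth keeping, is to hold the cache in an explicit atom instead of memoize's hidden one, so tests can clear it (the names mirror the hypothetical example above):

;; Explicit, resettable cache instead of memoize's hidden one.
(def organization-cache (atom {}))

(defn get-organization [id]
  (or (get @organization-cache id)
      (let [org (find-organization id)]
        (swap! organization-cache assoc id org)
        org)))

;; A test fixture can now reset the cache between tests:
;; (use-fixtures :each (fn [f] (reset! organization-cache {}) (f)))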

How do you find the pollution?

So you've got this huge test suite that fails some of the time and you suspect it might be due to test pollution. How do you track down the culprit(s)? A good first approach is to take a close look at the failing test(s), and pore over every interaction with shared mutable state there. Luckily, this is usually enough to track down what kind of pollution is going on.

But depending on how pervasive the interactions with shared mutable state get, you first need to isolate a test ordering that causes the failure. Then do a binary search (backing up from the beginning) to see which test causes the pollution. One client I worked with had a handy script to do this for their test suite. I can imagine pathological cases that depend on 3 or more tests interacting with each other in a strange way that breaks the binary search approach, but in practice it's not something I've seen.
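
A rough sketch of such a script in Clojure, assuming a vector of already-loaded suspect test namespaces (as symbols) in an order that reproduces the failure, plus the failing namespace (all names here are hypothetical):

(ns pollution-hunt
  (:require [clojure.test :as test]))

;; Run half of the suspects followed by the failing namespace. If the
;; failure reproduces, the polluter is in that half; otherwise it is in
;; the other half. Assumes a single polluting namespace.
(defn find-polluter [suspects failing-ns]
  (if (<= (count suspects) 1)
    (first suspects)
    (let [[left right] (split-at (quot (count suspects) 2) suspects)
          summary (apply test/run-tests (concat left [failing-ns]))]
      (if (pos? (+ (:fail summary) (:error summary)))
        (recur (vec left) failing-ns)
        (recur (vec right) failing-ns)))))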

Test pollution is one of those rare but annoying things that loses developer-days all the time, both for the people trying to track the problem down and for the people waiting for a green build to be able to merge their code to the master branch. If we take a bit more care when interacting with shared state, both in our test suites and production code, we'll save ourselves quite a bit of pain and frustration.

by Colin Jones at October 22, 2014 12:00 AM

October 21, 2014

QuantOverflow

Wholesale credit risk management

Trying to read up on "wholesale credit risk", but I can't find any useful references. Why the emphasis on wholesale? Any help greatly appreciated.

by mandytaylor at October 21, 2014 11:43 PM

Wes Felter

"Mute and ignore, while arguably unavoidable for large worldwide communities, are actively dangerous..."

“Mute and ignore, while arguably unavoidable for large worldwide communities, are actively dangerous for smaller communities.”

- Jeff Atwood

October 21, 2014 11:41 PM

/r/compsci

Probabilistic Method for Generating a Rhyme Scheme (Song Lyric Generator)

Continuing from this thread, I'm currently working on a project that "learns" how to generate a set of song lyrics by analyzing a set of existing lyrics and using Python's NLTK.

Currently, I'm able to detect rhymes and rate the "quality" of the rhyme based on the number of syllables they have in common, but I'm having trouble finding a probabilistic method for detecting/predicting rhyme scheme. I've done a lot of Googling, but none of the probabilistic methods I've seen seem to be a good fit.

Any help is appreciated, or if anyone can just point me in the right direction that would help greatly as well!

Thank you!

submitted by shawnadelic

October 21, 2014 11:39 PM

TheoryOverflow

Necessity of a Turing machine for a given problem in order to reduce it to another

I found it surprising that a certain type of reduction hasn't been flagged anywhere (except in Cook's original 1971 proof). Yes, there are Cook reductions (also known as Turing reductions), and the Karp reduction... but none of them address the issue of "how" an instance X of Problem A is transformed to an instance Y of Problem B.

Let us say:

Type-1 reduction: These are transformations or functions that map X to Y, where a Turing machine for Problem A is necessary, as in Cook's 1971 proof, to obtain instance Y.... Cook did this in his proof, but nobody else seems to have used this.... It is the encoding of the TM steps that gave Steve Cook the SAT instance.

(I guess for most of the subsequent reductions after Cook's 1971 proof, the above type was "not" necessary.)

Type-2 reduction: This is the usual reduction that you see in the literature... Example, as in Chapter 3 of Garey and Johnson... a Turing machine for Problem A is "not" necessary to do the transformation from X to Y... Karp reduction is an example of this type.

Is there anything in the literature that flags the difference between Type 1 and Type 2?

It looks like for some A-B problem pairs, the usual (type 2) is very difficult or perhaps even impossible, and type-1 is the only possible way.

I think type-2 is what you really want, if you want to prove that A reduces to B... As for the first type, it's too "weak" -- what's the point in doing it if you already have an algorithm for A? (except when you are proving completeness)

Thanks for your response.

by Martin Seymour at October 21, 2014 11:27 PM

StackOverflow

Play2 form attributes with - in them "value - is not a member of Symbol"

I've just got started with Play2 and I'm struggling with Scala.

In a view I've got this simple form helper to create a news item.

@textarea(
  newsItemForm("content"),
  '_label -> "Content",
  'rows -> 3,
  'cols -> 50
)

Now I'd like to add a data-wysiwyg to the attributes, but since it contains a -, Scala complains about - not being a member of Symbol.

Since ' is just a nice way of writing Symbol(...), I can get it working with Symbol("data-wysiwyg"), but then my views will look ugly with some attributes being specified with Symbol and others with '.

My question is: is there a way to use the Scala ' notation for HTML5 data- attributes?

by Leon Radley at October 21, 2014 11:14 PM

Scala replacement for Arrays.binarySearch?

Is there a replacement in Scala for Java's int Arrays.binarySearch(Object[] array, Object key)?

The problem is that Scala's Arrays are not covariant, so I would have to cast my stringArray: Array[String] like this first:

stringArray.asInstanceOf[Array[Object]]

Is there a better solution?

by soc at October 21, 2014 11:00 PM

/r/emacs

Anyone having issues with js-mode in 24.4?

The indentation seems to be trying to be too smart since I updated. Here's an example:

http://imgur.com/2j2PKoo

Whatever your opinion of the leading comma, I should at least be able to do it! What's changed?

submitted by harumphfrog

October 21, 2014 10:51 PM

StackOverflow

Automating Development Environment Setup

I've just started work at a new company and I am looking to automate as much as possible of their process for setting up a development environment for new starters on their computers.

Setting up a machine with a working development environment involves:

  1. Checking out 4 different projects
  2. Invoking maven to build and install those projects
  3. Starting JBoss fuse
  4. Running various windows bat files
  5. Starting JBoss Portal

At the moment I am considering writing a script in Scala to do the above, relying heavily on scala.sys.process. I am not too clued up on sbt at the moment and was wondering whether it is better suited for this type of task, or whether I am on the right track writing my own custom setup script in Scala.

by I.K. at October 21, 2014 10:46 PM

Getting TypeDoesNotMatch on Timestamp field when inserting using Play Framework with Postgres

I'm trying to insert data into a table called users. I'm only passing a value for the name field and this exception pops up.

I'm not even passing any Timestamp in the parameter.

The data still gets inserted into the database even if this happens. Why though?

Here is the error I'm getting: [RuntimeException: TypeDoesNotMatch(Cannot convert 2014-10-21 17:41:41.982: class java.sql.Timestamp to Long for column ColumnName(users.joined,Some(joined)))]

Here's the code:

DB.withConnection { implicit conn =>
  val id: Option[Long] =
    SQL("insert into pinglet.users (name) VALUES ('jel124')")
      .executeInsert()
  outString += id.getOrElse("nuffin'")
}

Info

joined is a field of data type timestamp with time zone.

My Scala version is 2.11.1

My Java version is 1.8.0_25

My Postgres JDBC driver is 9.3-1102-jdbc41

by rakista112 at October 21, 2014 10:44 PM

CompsciOverflow

Algorithm Introduction [on hold]

From what I have found out so far, Introduction to Algorithms by Cormen is the algorithm equivalent of the GOF Design Patterns book.

However, can anyone recommend a more introductory book please?

by Ciaran Martin at October 21, 2014 10:40 PM

StackOverflow

Clojure: adding to a map

If I have a vector of maps

(def v [{:key1 "value 1" :key2 "value2"} {:key1 "value 3" :key2 "value4"}])

and a map

(def m {:key3 "value2" :key4 "value5"})

How do I add map m to all the maps in vector v where the values of two given keys (in this case key2 and key3) are equal?

The expected result would be this:

[{:key1 "value 1" :key2 "value2" :key3 "value2" :key4 "value5"} {:key1 "value 3" :key2 "value4"}]
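
A minimal sketch of one possible answer (not part of the original question), merging m into every map whose :key2 equals m's :key3:

(mapv (fn [x]
        (if (= (:key2 x) (:key3 m))   ; compare the two given keys
          (merge x m)                 ; matched: add m's entries
          x))                         ; otherwise leave the map alone
      v)
;; => [{:key1 "value 1", :key2 "value2", :key3 "value2", :key4 "value5"}
;;     {:key1 "value 3", :key2 "value4"}]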

by Vesna at October 21, 2014 10:39 PM

/r/compilers

Using a backend vs developing my own

I'm working on a new programming language intended to compile down to native binaries. I'm wondering whether I should look to integrate it into an existing compiler backend, or continue with my own - also, what criteria does this decision depend on?

The language will be a mix of high level and low level features - offering high levels of abstraction using the functional paradigm, while also allowing low level machine access. I'm creating the language mostly because it's fun, but I'd like it to be usable.

What would be the benefits of each approach?

submitted by jczz

October 21, 2014 10:35 PM

CompsciOverflow

List the edges (vertex pairs) of a minimum spanning tree for this graph in the order they would be chosen by Prim's algorithm

[graph image]

Please help me to understand and complete this. I would very much appreciate it.

by user3255549 at October 21, 2014 10:32 PM

StackOverflow

Generic transform/fold/map over tuple/hlist containing some F[_]

I recently asked Map and reduce/fold over HList or tuple of scalaz.Validation and got a great answer as to how to transform a fixed sized tuple of Va[T] (which is an alias for scalaz.Validation[String, T]) into a scalaz.ValidationNel[String, T]. I've since then been studying Shapeless and type level programming in general to try to come up with a solution that works on tuples of any size.

This is what I'm starting out with:

import scalaz._, Scalaz._, shapeless._, contrib.scalaz._, syntax.std.tuple._

type Va[A] = Validation[String, A]

// only works on pairs of Va[_]
def validate[Ret, In1, In2](params: (Va[In1], Va[In2]))(fn: (In1, In2) => Ret) = {
  object toValidationNel extends Poly1 {
    implicit def apply[T] = at[Va[T]](_.toValidationNel)
  }
  traverse(params.productElements)(toValidationNel).map(_.tupled).map(fn.tupled)
}

so then validate is a helper I call like this:

val params = (
  postal  |> nonEmpty[String]("no postal"),
  country |> nonEmpty[String]("no country") >=> isIso2Country("invalid country")
)

validate(params) { (postal, country) => ... }

I started out by taking any Product instead of a pair and constraining its contents to Va[T]:

// needs to work with a tuple of Va[_] of arbitrary size
def validateGen[P <: Product, F, L <: HList, R](params: P)(block: F)(
  implicit
  gen: Generic.Aux[P, L],
  va:  UnaryTCConstraint[L, Va],
  fp:  FnToProduct.Aux[F, L => R]
) = ???

I do have the feeling that simply adding the constraint only makes sure the input is valid but doesn't help at all with implementing the body of the function, but I don't know how to go about correcting that.

traverse then started complaining about missing evidence, so I ended up with:

def validateGen[P <: Product, F, L <: HList, R](params: P)(block: F)(
  implicit
  gen: Generic.Aux[P, L],
  va:  UnaryTCConstraint[L, Va],
  tr:  Traverser[L, toValidationNel.type],
  fp:  FnToProduct.Aux[F, L => R]
) = {
  traverse(gen.to(params): HList)(toValidationNel).map(_.tupled).map(block.toProduct)
}

The compiler however continued to complain about a missing Traverser[HList, toValidationNel.type] implicit parameter even though it's there.

Which additional evidence do I need to provide to the compiler in order for the traverse call to compile? Has it got to do with the UnaryTCConstraint not being declared in a way that is useful to the traverse call, i.e. it cannot apply toValidationNel to params because it cannot prove that params contains only Va[_]?

P.S. I also found leftReduce Shapeless HList of generic types and tried to use foldRight instead of traverse to no avail; the error messages weren't too helpful when trying to diagnose which evidence the compiler was really lacking.

UPDATE:

As per what lmm has pointed out, I've removed the cast to HList, however, the problem's now that, whereas in the non-generic solution I can call .map(_.tupled).map(block.toProduct) on the result of the traverse call, I'm now getting:

value map is not a member of shapeless.contrib.scalaz.Out

How come it was possible on the result of the traverse(params.productElements)(toValidationNel) call but not on the generic traverse?

UPDATE 2:

Changing the Traverser[...] bit to Traverser.Aux[..., Va[L]] helped the compiler figure out the expected result type of the traversal, however, this only makes the validateGen function compile successfully but yields another error at the call site:

[error] could not find implicit value for parameter tr: shapeless.contrib.scalaz.Traverser.Aux[L,toValidationNel.type,Va[L]]
[error]     validateGen(params) { (x: String :: String :: HNil) => 3 }
[error]                         ^

I'm also getting the feeling here that the UnaryTCConstraint is completely unnecessary — but I'm still too new to Shapeless to know if that's the case.

by Erik Allik at October 21, 2014 09:48 PM

CompsciOverflow

Roucairol and Carvalho's Mutual Exclusion algorithm [on hold]

How to show that Roucairol and Carvalho's mutual exclusion algorithm is not fair, i.e. that requests are not satisfied in the order in which they were made?

by user2961121 at October 21, 2014 09:39 PM

StackOverflow

Clojure: extracting data from xml using clj-xpath

I'm using the clj-xpath library for extracting data from XML that comes from an API. As output I'm expecting a map of tags and their contents. I have a function that works; here's the code snippet:

(use 'clj-xpath.core)

(def data-url
  (str "http://api.eventful.com/rest/events/search?"
       "app_key=4H4Vff4PdrTGp3vV&"
       "keywords=music&location=New+York&date=Future"))

(defn create-keys [tags]
  (into [] (map keyword tags)))

(defn tag-fn [tag] (partial $x:text tag))

(defn func-contents [tags root-tag data-url]
  (map (apply juxt (map tag-fn tags))
       (take 2 ($x root-tag (xml->doc (slurp data-url))))))

(defn create-contents [tags root-tag data-url]
  (map #(zipmap (create-keys tags) %) (func-contents tags root-tag data-url)))

However, when I call create-contents it doesn't add the keys:

(func-contents ["url" "title"] "//event" data-url)

(["http://newyorkcity.eventful.com/events/chamber-music-society-lincoln-center-mixed-wind-/E0-001- 067617553-1?utm_source=apis&utm_medium=apim&utm_campaign=apic" "Chamber Music Society of Lincoln Center - MIxed Winds"] ["http://newyorkcity.eventful.com/events/evil-dead-musical-/E0-001-070989019-4?utm_source=apis&utm_medium=apim&utm_campaign=apic" "Evil Dead The Musical"])

And when I only eval its body it gives the expected result.

(map #(zipmap (create-keys ["url" "title"]) %) (func-contents ["url" "title"] "//event" data-url))
({:title "Chamber Music Society of Lincoln Center - MIxed Winds", :url    "http://newyorkcity.eventful.com/events/chamber-music-society-lincoln-center-mixed-wind-/E0-001-067617553-1?utm_source=apis&utm_medium=apim&utm_campaign=apic"} {:title "Evil Dead The Musical", :url "http://newyorkcity.eventful.com/events/evil-dead-musical-/E0-001-070989019-4?utm_source=apis&utm_medium=apim&utm_campaign=apic"})

Any ideas? Probably the problem is in the create-keys function, but I need it because I want a general function for any set of tags.

by Vesna at October 21, 2014 09:39 PM

Pattern match on manifest instances of sealed class

Given classes

sealed abstract class A

case class B(param: String) extends A

case class C(param: Int) extends A

trait Z {}

class Z1 extends Z {}

class Z2 extends Z {}

def zFor[T <: A : Manifest]: Option[Z] = {
  val z = manifest[T].erasure
  if (z == classOf[B]) {
    Some(new Z1)
  } else
  if (z == classOf[C]) {
    Some(new Z2)
  } else {
    None
  }
}

I think the problem with pattern matching here is the impossibility of building a pattern-matching table in the bytecode. Is there any workaround for this problem? Maybe I can use some Int generated in the Manifest by the compiler?

by jdevelop at October 21, 2014 09:38 PM

TheoryOverflow

Randomized identity-testing for high degree polynomials?

Let $f$ be an $n$-variate polynomial given as an arithmetic circuit of size poly$(n)$, and let $p = 2^{\Omega(n)}$ be a prime.

Can you test if $f$ is identically zero over $\mathbb{Z}_p$, with time $\mbox{poly}(n)$ and error probability $\leq 1-1/\mbox{poly}(n)$, even if the degree is not a priori bounded? What if $f$ is univariate?

Note that you can efficiently test if $f$ is identically zero as a formal expression, by applying Schwartz-Zippel over a field of size say $2^{2|f|}$, because the maximum degree of $f$ is $2^{|f|}$.

by user94741 at October 21, 2014 09:37 PM

StackOverflow

Advantage of asynchronous libraries

I was going through the Twitter Finagle library, which is an asynchronous service framework in Scala, and I have some questions regarding asynchronous libraries in general.

So as I understand it, the advantage of an asynchronous library using a callback is that the application thread gets freed and the library calls the callback as soon as the request is completed over the network. And in general the application threads might not have a 1:1 mapping with the library threads.

  1. The service call in the library thread is blocking, right?
  2. If that's the case, then we are just making the blocking call in some other thread. This makes the application thread free but some other thread is doing the same work. Can't we just increase the number of application threads to have that advantage?

It's possible that I misunderstand how asynchronous libraries are implemented in Java/Scala or the JVM in general. Can anyone help me understand how this works?

by piyush at October 21, 2014 09:36 PM

TheoryOverflow

What's the correlation between treewidth and instance hardness for random 3-SAT?

This recent paper from FOCS2013, Strong Backdoors to Bounded Treewidth SAT by Gaspers and Szeider talks about the link between the treewidth of the SAT clause graph and instance hardness.

For random 3-SAT, i.e. 3-SAT instances chosen at random, what is the correlation between treewidth of the clause graph and instance hardness?

"Instance hardness" can be taken as "hard for a typical SAT solver", i.e. running time.

I am looking for either theoretical or empirical style answers or references. To my knowledge, there do not seem to be empirical studies on this. I am aware there are somewhat different ways to build SAT clause graphs, but this question is not focused on the distinction.

Maybe a natural closely related question is how treewidth of the clause graph relates to the 3-SAT phase transition.

by vzn at October 21, 2014 09:30 PM

CompsciOverflow

Ricart and Agrawala's Mutual Exclusion algorithm [on hold]

How to show that in Ricart and Agrawala's mutual exclusion algorithm, requests are satisfied in the order of their timestamps?

by user2961121 at October 21, 2014 09:29 PM

/r/freebsd

Any idea why the FreeBSD RC3 builds are delayed?

The release schedule indicates that they should have started four days ago. Just curious.

submitted by good_names_all_taken

October 21, 2014 09:25 PM

StackOverflow

Converting xsd to clojure.data.xml.Element loses data when attribs have colon in name

I’m having trouble reading an xsd file. When I convert it to a clojure.data.xml.Element, it loses some attributes, such as xmlns:tns – how do I convert it correctly?

Ultimately I want to save the XML to a NoSQL database (I’m using MongoDB with Monger as it has good support) and then output it back as the same XML later.

Here’s a sample xsd, but I want to be able to upload any xsd/xslt/xml file:

<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" 
           xmlns:tns="http://tempuri.org/PurchaseOrderSchema.xsd" 
           targetNamespace="http://tempuri.org/PurchaseOrderSchema.xsd" 
           elementFormDefault="qualified">
 <xsd:element name="PurchaseOrder" type="tns:PurchaseOrderType"/>
 <xsd:complexType name="PurchaseOrderType">
  <xsd:sequence>
   <xsd:element name="ShipTo" type="tns:USAddress" maxOccurs="2"/>
   <xsd:element name="BillTo" type="tns:USAddress"/>
  </xsd:sequence>
  <xsd:attribute name="OrderDate" type="xsd:date"/>
 </xsd:complexType>

 <xsd:complexType name="USAddress">
  <xsd:sequence>
   <xsd:element name="name"   type="xsd:string"/>
   <xsd:element name="street" type="xsd:string"/>
   <xsd:element name="city"   type="xsd:string"/>
   <xsd:element name="state"  type="xsd:string"/>
   <xsd:element name="zip"    type="xsd:integer"/>
  </xsd:sequence>
  <xsd:attribute name="country" type="xsd:NMTOKEN" fixed="US"/>
 </xsd:complexType>
</xsd:schema>

Here’s a sample of the code I use to read it:

(ns example
  (:use compojure.core)
  (:require [clojure.data.xml :as xml]))

(defn uploadXMLfile
  "Upload xml file"
  [file]
  (let [xmlstr (slurp (:tempfile file))]
    (xml/parse-str xmlstr :supporting-external-entities true
                          :namespace-aware true
                          :replacing-entity-references true)))

The output I get is:

#clojure.data.xml.Element{:tag :schema,
  :attrs {:targetNamespace http://tempuri.org/PurchaseOrderSchema.xsd, :elementFormDefault qualified},
  :content (#clojure.data.xml.Element{:tag :element, :attrs {:name PurchaseOrder, :type tns:PurchaseOrderType}, :content ()}
            #clojure.data.xml.Element{:tag :complexType, :attrs {:name PurchaseOrderType},
              :content (#clojure.data.xml.Element{:tag :sequence, :attrs {},
                          :content (#clojure.data.xml.Element{:tag :element, :attrs {:name ShipTo, :type tns:USAddress, :maxOccurs 2}, :content ()}
                                    #clojure.data.xml.Element{:tag :element, :attrs {:name BillTo, :type tns:USAddress}, :content ()})}
                        #clojure.data.xml.Element{:tag :attribute, :attrs {:name OrderDate, :type xsd:date}, :content ()})}
            #clojure.data.xml.Element{:tag :complexType, :attrs {:name USAddress},
              :content (#clojure.data.xml.Element{:tag :sequence, :attrs {},
                          :content (#clojure.data.xml.Element{:tag :element, :attrs {:name name, :type xsd:string}, :content ()}
                                    #clojure.data.xml.Element{:tag :element, :attrs {:name street, :type xsd:string}, :content ()}
                                    #clojure.data.xml.Element{:tag :element, :attrs {:name city, :type xsd:string}, :content ()}
                                    #clojure.data.xml.Element{:tag :element, :attrs {:name state, :type xsd:string}, :content ()}
                                    #clojure.data.xml.Element{:tag :element, :attrs {:name zip, :type xsd:integer}, :content ()})}
                        #clojure.data.xml.Element{:tag :attribute, :attrs {:name country, :type xsd:NMTOKEN, :fixed US}, :content ()})})}

by user619882 at October 21, 2014 09:23 PM

String concatenation in Spark SQL query

I'm experimenting with Spark and Spark SQL and I need to concatenate a value at the beginning of a string field that I retrieve as output from a select (with a join) like the following:

val result = sim.as('s)   
    .join(
        event.as('e),
        Inner,
        Option("s.codeA".attr === "e.codeA".attr))   
    .select("1"+"s.codeA".attr, "e.name".attr)  

Let's say my tables contain:

sim:

codeA,codeB
0001,abcd
0002,efgh

events:

codeA,name
0001,freddie
0002,mercury

And I would want as output:

10001,freddie
10002,mercury

In SQL or HiveQL I know I have the concat function available, but it seems Spark SQL doesn't support this feature. Can somebody suggest a workaround for my issue?

Thank you.

Note: I'm using Language-Integrated Queries, but I could use just a "standard" Spark SQL query if that's what a solution requires.

by erond at October 21, 2014 09:17 PM

How to run `lein test` from the REPL?

lein test is convenient in that it has an elaborate test selection mechanism, and without any additional input runs all tests defined in the current project's namespaces; however, it boots up a new JVM every time you run it, making it too slow. Running it from the REPL would lend itself better to REPL-driven development.

How to run lein test from the REPL, without booting a new JVM?
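
A sketch of one commonly suggested approach (not part of the original question), assuming the project's test namespaces end in -test and clojure.tools.namespace is on the classpath; note it does not replicate lein's test selectors:

(require '[clojure.tools.namespace.repl :refer [refresh]]
         '[clojure.test :as test])

(refresh)                          ; reload any changed namespaces
(test/run-all-tests #".*-test")    ; run every loaded namespace ending in -test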

by Dominykas Mostauskis at October 21, 2014 09:16 PM

Get data from database in form of JSON in PLAY(Scala)

I am new to Play 2.1 and Scala. I come from a node.js background, where results from the database are returned directly in JSON form. What I want is to get the data from the database in JSON form in Play (Scala). I have tried Json.toJson but it shows an error about a deserializer or something. Can anybody find me a solution to this problem, with a model & controller description? Thanks in advance.

I am using a MySQL database. Here is the model code:

import java.math.BigInteger
import anorm._
import anorm.SqlParser._
import play.api.db.DB
import play.api.Play.current

// class definition
case class Data(Date_Time_id: BigInteger, Details: String, Image: String,
                Status: Boolean, Type: String)

object Model {
  def getDetails(Person_id: Long): Map[BigInteger, Data] = {
    DB.withConnection { implicit c =>
      val result = SQL("""select Date_Time_id, Details, Image,
          Status, Type from table1 where Person_id={perId}
          """).on("perId" -> Person_id)

      // mapping result
      val detailss = result().map(row =>
        row[BigInteger]("Date_Time_id") -> Data(
          row[BigInteger]("Date_Time_id"), row[String]("Details"),
          row[String]("Image"), row[Boolean]("Status"),
          row[String]("Type"))).toMap
      detailss
    }
  }
}

I am calling it from the controller like:

var getResult=Model.getDetails(some Id)

by Ravi Mamain at October 21, 2014 09:15 PM

/r/compsci

time complexity question

Suppose you're given a function that has two nested for loops (one within the other). The outer loop runs over n things while the inner only runs over a much smaller subset of n. What would the time complexity be?

Since this question may be a bit ambiguous I'll give an example via pseudocode. These functions will reverse a given string, keeping spaces intact, then reverse each word to give the original string in reverse order.

def rev_word(word):
    for i in range(-1, -len(word)-1, -1):  # loops through word backwards
    return reverse word

def reverse_string(string):
    rev_string = string.reverse  # this is done with a for loop as well but not important for the question
    for i in range(0, len(rev_string)):
        if index is at the end of a word:
            rev_string = rev_string[:start of word] + rev_word(word) + rev_string[index:]
    return rev_string

There's an edge case here that I'm ignoring since it's not important for the question. But my question is simply: the outer loop will loop through the entire string; the inner, however, only loops through every word (reversing them).

When you see a nested loop you think n^2 (or whatever nesting depth you're dealing with), but this one doesn't seem to fit the usual definition.

Can you give an answer for this, thanks.

My thoughts: since we're dealing with big O, we're looking at the worst case. To me this would be if the entire string was a single word. Then the outer loop would go through every char of the word, then the rev_word function would loop through it again, making it n^2. The average case would be something much smaller and seems to be linear.
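
For comparison, a sketch of the same two-pass idea in Clojure (hypothetical code, not from the post): the whole-string reverse touches each character once, and the per-word reverses touch each character once more, so the nested-looking structure still does O(n) total work.

(require '[clojure.string :as str])

(defn reverse-word-order [s]
  (let [rev (str/reverse s)]        ; pass 1: reverse the whole string, O(n)
    (->> (str/split rev #" ")       ; each word appears exactly once
         (map str/reverse)          ; pass 2: reverse each word, O(n) in total
         (str/join " "))))

;; (reverse-word-order "hello big world") => "world big hello"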

submitted by razeal113

October 21, 2014 09:11 PM

Planet FreeBSD

KDEConnect in PC-BSD — Remote Control Your Desktop From Your Android Phone

Hey guys, check out our video on KDEConnect in PC-BSD on YouTube! It’s an awesome new app that allows you to receive text messages, phone notifications, incoming call notifications, media remote control, and more!

by Josh Smith at October 21, 2014 09:08 PM

StackOverflow

Erlang Towers of Hanoi

Currently stuck on trying to implement the towers of Hanoi using a collection. I am trying to follow the example in Java using stacks (http://www.sanfoundry.com/java-program-implement-solve-tower-of-hanoi-using-stacks/), but I am getting errors.

-module(toh).
-export([begin/0]).
begin() -> 
  Pegs = 3,
  TowerA = createTowerA(N),
  TowerB = [],
  TowerC = [],
  move(Pegs, TowerA, TowerB, TowerC).

%fills Tower A with integers from Pegs to 1.
createTowerA(0) -> [];
createTowerA(N) when N > 0 ->
   [N] ++ createTowerA(N - 1).

%displays the towers
display(A, B, C) -> 
   io:format("~w\t~w\t~w~n", [A, B, C]).

move(Pegs, TowerA, TowerB, TowerC) -> 
  if Pegs > 0 ->
    move(Pegs, TowerA, TowerC, TowerB),
    Temp = lists:last(TowerA),
    NewTowerC = C ++ Temp,
    NewTowerA = lists:sublist(TowerA, length(TowerA) - 1),
    display(NewTowerA, B, NewTowerC),
    move(Pegs - 1, B, NewTowerA, NewTowerC);
  end

When I try running the code, I get this error.

{"init terminating in do_boot",{undef,[{toh,begin,[],[]},{init,begin_i
t,1,[{file,"init.erl"},{line,1057}]},{init,begin_em,1,[{file,"init.erl"},{line,1
037}]}]}}

Crash dump was written to: erl_crash.dump
init terminating in do_boot ()

Can someone see why this is not working? I'm just trying to follow the sanfoundry example.

by user3749140 at October 21, 2014 09:07 PM

How can I write a recursive polymorphic function with Shapeless

I can write a simple recursive polymorphic function:

object simpleRec extends Poly1 {
  implicit def caseInt = at[Int](identity)
  implicit def caseList[A, B](implicit ev: simpleRec.Case.Aux[A, B]) =
    at[List[A]](_.headOption.map(simpleRec))
}

This seems to largely do what I want; however, I seem to be getting a nonsensical result type:

scala> simpleRec(List.empty[List[Int]])
res3: Option[B] = None

scala> simpleRec(List(List(1)))
res4: Option[B] = Some(Some(1))

How can I make this give me values of Option[Option[Int]] rather than Option[B]? I expect I'm making some silly mistake here, but can't work out what it is.
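I believe the usual remedy here (with shapeless 2.x) is to annotate the return types of the implicit definitions, since without the annotation the refined result type is lost at the summoning site:

object simpleRec extends Poly1 {
  implicit def caseInt: Case.Aux[Int, Int] = at[Int](identity)
  // The explicit Case.Aux return type is what propagates B to callers.
  implicit def caseList[A, B](implicit ev: Case.Aux[A, B]): Case.Aux[List[A], Option[B]] =
    at[List[A]](_.headOption.map(simpleRec))
}

With that change, simpleRec(List(List(1))) should come back typed as Option[Option[Int]] rather than Option[B].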

by Huw at October 21, 2014 09:04 PM

How do I check where my code gets stuck in Erlang?

I'm trying to write a function that receives a list, finds the highest value integer in the list, and then divides all the other integers in the list by that value.

Unfortunately, my code gets stuck somewhere. If this were Python, for example, I could easily write a couple of different "print"s and see where it gets stuck. But how do you do that in Erlang?

Here is the code.

highest_value([], N) ->
    if
        N =:= 0 ->
            'Error! No positive values.'
    end,
    N;
highest_value([H|T], N) when H > N, H > 0 ->
    highest_value([T], H);
highest_value([_|T], N) ->
    highest_value([T], N).

divide(_L) -> [X / highest_value(_L, 0) || X <- _L].
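For what it's worth, the Erlang analogue of those prints is io:format, e.g. io:format("H=~p N=~p~n", [H, N]) dropped into a clause. Here it would show the list never shrinking: both recursive calls pass [T] (a one-element list holding the tail) rather than T, so the recursion ends up looping on highest_value([[]], ...) forever; the if with no true branch is the other trap waiting after that.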

by G.O.A.T at October 21, 2014 09:03 PM

/r/emacs

24.4 broke my zenburn theme modification

I upgraded to 24.4 today, and a piece of code I had for modifying the zenburn-theme from MELPA doesn't work anymore.

Zenburn theme and whitespace mode together make tabs quite a bright pink which sticks out a lot. And while I want to see the difference between tabs and spaces, I don't want tabs to be distracting. So I had this piece of code just to make tabs a normalish colour:

(eval-after-load 'zenburn
  '(zenburn-with-color-variables
     (custom-theme-set-faces
      'zenburn
      `(whitespace-tab ((t (:background ,zenburn-bg+1)))))))
(load-theme 'zenburn t)

Does anybody have any idea why this wouldn't work anymore, and how to fix it? Cheers.

(I don't think the problem lies with the eval-after-load, because executing the bit inside that well after emacs has loaded also does not work.)

submitted by kill_jester
[link] [2 comments]

October 21, 2014 08:46 PM

StackOverflow

Scala 2.11: overriding things from an abstract class

I have a question concerning Scala's override (as my title suggests).

Now I have the following classes/traits:

trait FSM { def transitionGraph: Map[String, (JsValue, FSM)] }
abstract class AClass extends FSM { def transitionGraph ... }

class Class extends AClass { override def transitionGraph ... }   <-- won't work

trait OverrideTrait extends AClass { abstract override def transitionGraph ... }   <-- works
class NewClass extends OverrideTrait { }   <-- works, I can use the overridden transitionGraph

My question is: why can I not override things from an abstract class? Is it because I am never allowed to instantiate an abstract class, so that the assignment

val aClass: AClass = new Class

is never allowed to happen?
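For reference, overriding a member inherited from an abstract class is allowed; here is a minimal self-contained sketch (JsValue swapped for String so it compiles on its own) that the compiler accepts:

trait FSM { def transitionGraph: Map[String, (String, FSM)] }

abstract class AClass extends FSM            // transitionGraph stays abstract here

class Concrete extends AClass {
  // Implementing an inherited abstract member compiles fine;
  // override is even optional when the parent member is abstract.
  override def transitionGraph: Map[String, (String, FSM)] = Map.empty
}

val a: AClass = new Concrete                 // abstract type, concrete instance

So I suspect the error comes from a detail lost in the elided signatures rather than from the abstract class itself.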

Thanks.

by Marc HPunkt at October 21, 2014 08:44 PM

Map Shapeless hlist type F[T1] :: ... :: F[Tn] :: HNil to the type T1 :: ... :: Tn :: HNil

I'm building a generic function that takes in a HList of the form F[T1] :: ... :: F[Tn] :: HNil, converts that into a F[T1 :: ... :: Tn :: HNil] and then needs to pass that into a block that was passed in. However, in order for that to work, I need to extract the HList type in that F[_]. I've found something remotely relevant under Shapeless' hlistconstraints:

/**
 * Type class witnessing that every element of `L` has `TC` as its outer type constructor. 
 */
trait UnaryTCConstraint[L <: HList, TC[_]]

...but this can only be used to verify that the hlist passed in is indeed made up of just F[_]; there seems to be no way however to extract that _ bit so to say to a hlist of its own.

Where should I be looking to find something to do the job? Or should I just not expect to find anything out of the box and instead build the type computation myself?

Disclosure: this question is an auxiliary to Generic transform/fold/map over tuple/hlist containing some F[_] but is nevertheless at least as useful as a standalone question, in my opinion.
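Unless I am misreading the shapeless 2.x source, shapeless.ops.hlist.Comapped is exactly this computation: it witnesses that L has the shape F[T1] :: ... :: F[Tn] :: HNil and exposes T1 :: ... :: Tn :: HNil as its Out member. A quick sketch:

import shapeless._
import shapeless.ops.hlist.Comapped

type In = Option[Int] :: Option[String] :: HNil

// Compiles only because Comapped can strip the Option constructor off each element.
implicitly[Comapped.Aux[In, Option, Int :: String :: HNil]]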

by Erik Allik at October 21, 2014 08:39 PM

scala Enumeration with constructor and lookup table

I saw the following solution for an enum somewhere

import scala.collection.mutable.ArrayBuffer

object Day {
  val values = new ArrayBuffer[Day]
  case class Day(name: String) {
    values += this
  }
  val MONDAY = new Day("monday")
  val TUESDAY = new Day("tuesday")
}

This demonstrates what I am trying to go for, EXCEPT there is a var hidden in the ArrayBuffer... that is kind of icky.

What I really want is a val lookupTable = Map() where, when a request comes in, I can look up "monday" and translate it to my enum MONDAY and use the enum throughout the software. How is this typically done? I saw sealed traits but didn't see a way to automatically make sure that when someone adds a class that extends it, it would automatically be added to the lookup table. Is there a way to create a Scala enum with a lookup table?

A scala Enumeration seems close as it has a values() method, but I don't see how to pass in the strings representing the days (which is what we receive from our users) and then translate that into an enum.
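One common shape for this, as a sketch: keep the registry explicit and immutable. The compiler will not auto-register subclasses, but sealing the hierarchy confines new cases to one file, where a forgotten entry in values is hard to miss:

sealed abstract class Day(val name: String)

object Day {
  case object Monday  extends Day("monday")
  case object Tuesday extends Day("tuesday")

  // Explicit, immutable registry: no hidden mutation during construction.
  val values: Seq[Day] = Seq(Monday, Tuesday)
  val byName: Map[String, Day] = values.map(d => d.name -> d).toMap
}

Day.byName("monday")   // Day.Monday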

thanks, Dean

by Dean Hiller at October 21, 2014 08:37 PM

Scala pattern-matching confusion

I have started learning Scala and I don't quite understand some behaviors of pattern matching. Can anyone explain to me why the first case works but the second doesn't?

1

def getFirstElement(list: List[Int]) : Int = list match {
    case h::tail => h
    case _ => -1
}

Scala> getFirstElement(List(1,2,3,4))
res: Int = 1

Scala> 1 :: List(1,2)
res: List[Int] = List(1, 1, 2)

2

def getSumofFirstTwoElement(list: List[Int]): Int = list match {
    case List(a: Int, b: Int)++tail => a + b
    case _ => -1
}

<console>:11: error: not found: value ++

Scala> List(1,2) ++ List(3,4,5)
res: List[Int] = List(1, 2, 3, 4, 5)
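For contrast, the following compiles: :: is a case class with an extractor, so it can appear in patterns, whereas ++ is only a method and has no unapply:

def getSumOfFirstTwoElements(list: List[Int]): Int = list match {
  case a :: b :: tail => a + b   // a and b are the first two elements
  case _              => -1
}

getSumOfFirstTwoElements(List(1, 2, 3, 4))   // 3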

by Pierrew at October 21, 2014 08:29 PM

Access object inside of a companion object in scala

I have the following:

case class Location(name: String, level: Location.Level)

object Location {
  trait Level
  case object City extends Level
  case object State extends Level
}

If I try to access City (from another source file), I get an error saying something like

found   : model.Location.City.type
required: model.Level

I can think of some work-arounds, but I'm wondering if there's a way to keep my names the same i.e. I'd like to access City by typing Location.City.

EDIT:

I'm accessing it like this:

import the.package.name._
Location.City

by three-cups at October 21, 2014 08:26 PM

/r/compsci

What to do if you forgot math?

Hello. I'm interested in Computer Science, but I have forgotten a lot of things (mostly math) from school, mostly from high school. Should I review everything from high school, or should I study specific things?

I have three months from now. Should I study on Khan Academy, or is there an alternative website that I don't know of?

Thank you.

Edit: I forgot to mention that I have three choices. Ryerson Uni, York uni or University of Ontario Institute of Technology. Which of them are the better choices?

submitted by cefaqu
[link] [5 comments]

October 21, 2014 07:59 PM

CompsciOverflow

How to compute the minimum operations required to perform chained matrix multiplication [on hold]

A1 x A2 x A3 x A4

Here A1 is 4 x 5, A2 is 5 x 10, A3 is 10 x 7, and A4 is 7 x 3.

So far this is what I have for the matrix m[][] (image not included).

Please help me to understand and complete this problem.
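For reference, a sketch of the standard O(n^3) dynamic program in Scala (names here are mine, not the asker's): dims holds the dimensions so that matrix i is dims(i) x dims(i+1), and m(i)(j) is the cheapest cost of multiplying matrices i through j:

def matrixChainOrder(dims: Array[Int]): Int = {
  val n = dims.length - 1                       // number of matrices
  val m = Array.fill(n, n)(0)                   // m(i)(j): min cost for A_i..A_j
  for (len <- 2 to n; i <- 0 to n - len) {
    val j = i + len - 1
    m(i)(j) = Int.MaxValue
    for (k <- i until j)                        // split point between A_k and A_(k+1)
      m(i)(j) = m(i)(j) min
        (m(i)(k) + m(k + 1)(j) + dims(i) * dims(k + 1) * dims(j + 1))
  }
  m(0)(n - 1)
}

matrixChainOrder(Array(4, 5, 10, 7, 3))   // 420 scalar multiplications for this chain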

by user3255549 at October 21, 2014 07:47 PM

StackOverflow

How can one pass the Scala version string to a code generator for SBT?

When using code generators with SBT, one uses constructs like

def genFile(out: File): Seq[File] = {
  val file = new File(out, "generated.scala")
  // Add stuff to file
  Seq(file)
}

(sourceGenerators in Compile) <+= (sourceManaged in Compile) map (genFile _)

If your generator needs the Scala version string, how do you pass it in? Using scalaVersion.value in genFile results in an error.
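scalaVersion.value fails there because .value is a macro that only works inside a task or setting definition, not inside an ordinary method. One way around it, I believe, keeping the sbt 0.13 tuple syntax from the question, is to thread the version through as an argument:

def genFile(out: File, scalaVer: String): Seq[File] = {
  val file = new File(out, "generated.scala")
  // Write source that mentions scalaVer into `file` here.
  Seq(file)
}

(sourceGenerators in Compile) <+= (sourceManaged in Compile, scalaVersion) map (genFile _)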

by Rex Kerr at October 21, 2014 07:46 PM

Apache Spark MLLib - Running KMeans with IDF-TF vectors - Java heap space

I'm trying to run KMeans from MLLib on a (large) collection of text documents (TF-IDF vectors). Documents are sent through a Lucene English analyzer, and sparse vectors are created with the HashingTF.transform() function. Whatever degree of parallelism I use (through the coalesce function), KMeans.train always returns the OutOfMemory exception below. Any thoughts on how to tackle this issue?

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at scala.reflect.ManifestFactory$$anon$12.newArray(Manifest.scala:138)
at scala.reflect.ManifestFactory$$anon$12.newArray(Manifest.scala:136)
at breeze.linalg.Vector$class.toArray(Vector.scala:80)
at breeze.linalg.SparseVector.toArray(SparseVector.scala:48)
at breeze.linalg.Vector$class.toDenseVector(Vector.scala:75)
at breeze.linalg.SparseVector.toDenseVector(SparseVector.scala:48)
at breeze.linalg.Vector$class.toDenseVector$mcD$sp(Vector.scala:74)
at breeze.linalg.SparseVector.toDenseVector$mcD$sp(SparseVector.scala:48)
at org.apache.spark.mllib.clustering.BreezeVectorWithNorm.toDense(KMeans.scala:422)
at org.apache.spark.mllib.clustering.KMeans$$anonfun$initKMeansParallel$1.apply(KMeans.scala:285)
at org.apache.spark.mllib.clustering.KMeans$$anonfun$initKMeansParallel$1.apply(KMeans.scala:284)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
at org.apache.spark.mllib.clustering.KMeans.initKMeansParallel(KMeans.scala:284)
at org.apache.spark.mllib.clustering.KMeans.runBreeze(KMeans.scala:143)
at org.apache.spark.mllib.clustering.KMeans.run(KMeans.scala:126)
at org.apache.spark.mllib.clustering.KMeans$.train(KMeans.scala:338)
at org.apache.spark.mllib.clustering.KMeans$.train(KMeans.scala:348)

by Antoine Amend at October 21, 2014 07:38 PM

StackOverflow

Injecting dependencies in tests in Play framework with scaldi

I'm looking for a way to inject a dependency into a Test (in /tests/models/) that looks like following:

class FolderSpec(implicit inj: Injector) extends Specification with Injectable{

  val folderDAO = inject [FolderDAO]

  val user = User(Option(1), LoginInfo("key", "value"), None, None)

  "Folder model" should {

    "be addable to the database" in new WithFakeApplication {
      folderDAO.createRootForUser(user)
      val rootFolder = folderDAO.findUserFolderTree(user)
      rootFolder must beSome[Folder].await
    }

  }
}

Where

abstract class WithFakeApplication extends WithApplication(FakeApplication(additionalConfiguration = inMemoryDatabase()))

/app/modules/WebModule:

class WebModule extends Module{
  bind[FolderDAO] to new FolderDAO
}

/app/Global:

object Global extends GlobalSettings with ScaldiSupport with SecuredSettings with Logger {
  def applicationModule = new WebModule :: new ControllerInjector
}

But I then get the following stack trace:

[error] Could not create an instance of models.FolderSpec
[error]   caused by java.lang.Exception: Could not instantiate class models.FolderSpec: argument type mismatch
[error]   org.specs2.reflect.Classes$class.tryToCreateObjectEither(Classes.scala:93)
[error]   org.specs2.reflect.Classes$.tryToCreateObjectEither(Classes.scala:207)
[error]   org.specs2.specification.SpecificationStructure$$anonfun$createSpecificationEither$2.apply(BaseSpecification.scala:119)
[error]   org.specs2.specification.SpecificationStructure$$anonfun$createSpecificationEither$2.apply(BaseSpecification.scala:119)
[error]   scala.Option.getOrElse(Option.scala:120)

Sadly, I didn't find anything on the matter in the Scaldi documentation.

Is there a way to inject things in tests?
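I cannot be sure without the full project, but the trace reads as specs2 failing to construct the spec reflectively because of the implicit constructor parameter. A common workaround is a no-argument spec that supplies its own Injector (WebModule here is the module from the question; a dedicated test module works the same way):

class FolderSpec extends Specification with Injectable {
  implicit val injector: Injector = new WebModule   // test wiring, not Play's

  val folderDAO = inject [FolderDAO]
  // ... examples as before ...
}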

by Mironor at October 21, 2014 07:14 PM

AWS

CloudWatch Update - Enhanced Support for Windows Log Files

Earlier this year, we launched a log storage and monitoring feature for AWS CloudWatch. As a quick recap, this feature allows you to upload log files from your Amazon Elastic Compute Cloud (EC2) instances to CloudWatch, where they are stored durably and easily monitored for specific symbols or messages.

The EC2Config service runs on Microsoft Windows instances on EC2, and takes on a number of important tasks. For example it is responsible for uploading log files to CloudWatch. Today we are enhancing this service with support for Windows Performance Counter data and ETW (Event Tracing for Windows) logs. We are also adding support for custom log files.

In order to use this feature, you must enable CloudWatch logs integration and then tell it which files to upload. You can do this from the instance by running EC2Config and checking Enable CloudWatch Logs integration:

The file %PROGRAMFILES%\Amazon\Ec2ConfigService\Settings\AWS.EC2.Windows.CloudWatch.json specifies the files to be uploaded.

To learn more about how this feature works and how to configure it, head on over to the AWS Application Management Blog and read about Using CloudWatch Logs with Amazon EC2 Running Microsoft Windows Server.

-- Jeff;

by Jeff Barr (awseditor@amazon.com) at October 21, 2014 07:10 PM

StackOverflow

How to remove duplicate keys and merge other key-values of list using scala?

I am getting the following List[JSONObject] structure as the output of a snippet:

List(List({
"groupName": "group1",
"maxSeverity": -1,
"hostCount": 3,
"members": [
    "192.168.20.11",
    "192.168.20.52",
    "192.168.20.53"
]
}),
List(),
List({
"groupName": "group1",
"maxSeverity": -1,
"hostCount": 2,
"members": [
    "192.168.20.20",
    "192.168.20.52"
]
}))

I want to merge the whole output to form a list which contains: 1) group name

2) severity - the minimum over all list elements

3) hostcount - the sum of hostCount over all list elements

4) members - a similar array without duplicate values from all list elements.

So output will be somewhat like this-

List({
"groupName": "group1",
"maxSeverity": -1,
"hostCount": 5,
"members": [
    "192.168.20.11",
    "192.168.20.52",
    "192.168.20.53",
    "192.168.20.20",
    "192.168.20.52"
]
})

How do I merge the whole list into a single list to get the above output?
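One sketch of the merge, modelling the JSON as a case class (names hypothetical): group by groupName, then take the minimum severity, sum the host counts, and deduplicate members. (Note that distinct would also drop the duplicated 192.168.20.52 shown in the expected output above.)

case class Group(groupName: String, maxSeverity: Int, hostCount: Int, members: Seq[String])

def merge(groups: Seq[Group]): Seq[Group] =
  groups.groupBy(_.groupName).values.toSeq.map { gs =>
    Group(
      gs.head.groupName,
      gs.map(_.maxSeverity).min,      // minimum severity across the group
      gs.map(_.hostCount).sum,        // host counts add up
      gs.flatMap(_.members).distinct) // members merged without duplicates
  }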

by user3322141 at October 21, 2014 07:08 PM

AWS

Speak to Amazon Kinesis in Python

My colleague Rahul Patil sent me a nice guest post. In the post Rahul shows you how to use the new Kinesis Client Library (KCL) for Python developers.

-- Jeff;


The Amazon Kinesis team is excited to release the Kinesis Client Library (KCL) for Python developers! Developers can use the KCL to build distributed applications that process streaming data reliably at scale. The KCL takes care of many of the complex tasks associated with distributed computing, such as load-balancing across multiple instances, responding to instance failures, checkpointing processed records, and reacting to changes in stream volume.

You can download the KCL for Python using Github, or PyPi.

Getting Started
Once you are familiar with key concepts of Kinesis and KCL, you are ready to write your first application. Your code has the following duties:

  1. Set up application configuration parameters.
  2. Implement a record processor.

The application configuration parameters are specified by adding a properties file. For example:

# The python executable script 
executableName = sample_kclpy_app.py

# The name of an Amazon Kinesis stream to process.
streamName = words

# Unique KCL application name
applicationName = PythonKCLSample

# Read from the beginning of the stream
initialPositionInStream = TRIM_HORIZON

The above example configures KCL to process a Kinesis stream called "words" using the record processor supplied in sample_kclpy_app.py. The unique application name is used to coordinate amongst workers running on multiple instances.

Developers have to implement the following three methods in their record processor:

initialize(self, shard_id)
process_records(self, records, checkpointer)
shutdown(self, checkpointer, reason)

initialize() and shutdown() are self-explanatory; they are called once in the lifecycle of the record processor to initialize and clean up the record processor respectively. If the shutdown reason is TERMINATE (because the shard has ended due to split/merge operations), then you must also take care to checkpoint all of the processed records.

You implement the record processing logic inside the process_records() method. The code should loop through the batch of records and checkpoint at the end of the call. The KCL assumes that all of the records have been processed. In the event the worker fails, the checkpointing information is used by KCL to restart the processing of the shard at the last checkpointed record.

# Process records and checkpoint at the end of the batch
    def process_records(self, records, checkpointer):
        for record in records:
            # record data is base64 encoded
            data = base64.b64decode(record.get('data'))
            ####################################       
            # Insert your processing logic here#
            ####################################       
       
        #checkpoint after you are done processing the batch  
        checkpointer.checkpoint()

The KCL connects to the stream, enumerates shards, and instantiates a record processor for each shard. It pulls data records from the stream and pushes them into the corresponding record processor. The record processor is also responsible for checkpointing processed records.

Since each record processor is associated with a unique shard, multiple record processors can run in parallel. To take advantage of multiple CPUs on the machine, each Python record processor runs in a separate process. If you run the same KCL application on multiple machines, the record processors will be load-balanced across these machines. This way, KCL enables you to seamlessly change machine types or alter the size of the fleet.

Running the Sample
The release also comes with a sample word counting application. Navigate to the amazon_kclpy directory and install the package.

$ python setup.py download_jars
$ python setup.py install

A sample putter is provided to create a Kinesis stream called "words" and put random words into that stream. To start the sample putter, run:

$ sample_kinesis_wordputter.py --stream words -p 1 -w cat -w dog -w bird

You can now run the sample python application that processes records from the stream we just created:

$ amazon_kclpy_helper.py --print_command --java <path-to-java> --properties samples/sample.properties

Before running the samples, you'll want to make sure that your environment is configured to allow the samples to use your AWS credentials via the default AWS Credentials Provider Chain.

Under the Hood - What You Should Know
KCL for Python uses KCL for Java. We have implemented a Java based daemon, called MultiLangDaemon that does all the heavy lifting. Our approach has the daemon spawn a sub-process, which in turn runs the record processor, which can be written in any language. The MultiLangDaemon process and the record processor sub-process communicate with each other over STDIN and STDOUT using a defined protocol. There will be a one to one correspondence amongst record processors, child processes, and shards. For Python developers specifically, we have abstracted these implementation details away and expose an interface that enables you to focus on writing record processing logic in Python. This approach enables KCL to be language agnostic, while providing identical features and similar parallel processing model across all languages.

Join the Kinesis Team
The Amazon Kinesis team is looking for talented Web Developers and Software Development Engineers to push the boundaries of stream data processing! Here are some of our open positions:

-- Rahul Patil

by Jeff Barr (awseditor@amazon.com) at October 21, 2014 06:59 PM

/r/emacs

Fix font rendering on OSX (Emacs 24.4)

I noticed Emacs 24.4 on OSX seems to break (or improve, YMMV) font rendering with some themes, including Zenburn.

To fix it, use your preferred OSX terminal and do this:

defaults write org.gnu.Emacs FontBackend ns 

Restart Emacs and everything should be fine again.

submitted by rhabarba
[link] [4 comments]

October 21, 2014 06:54 PM

QuantOverflow

Hedging bond with CDS of different maturity

Say I buy a 10-year bond with a notional of 100k. To hedge my credit risk entirely I could buy a 10-year CDS, also on a notional of 100k.

Now, if there are only 5-year CDS trading and no 10-year CDS, then I could still hedge the first 5 years of my bond, assuming that I do not "care" about the years 5 till 10 right now.

But the question is, on which notional should I buy the 5-year CDS. Intuitively I would say it should also be 100k. But I heard the following reasoning which I do not fully understand:

Making the simplifying assumption that the risky annuities (RA) of the two CDS contracts are 5 and 10 respectively, one would need to buy a 5-year CDS with a notional of 200k. The reason being that (in its first five years) a 5-year CDS with 2*100k notional and RA of 5 acts like a 10-year CDS with notional 100k and RA 2*5.

Could somebody explain this behaviour? Is the reasoning right or wrong? Basically, how would one try to cope with the fact that 10-year CDS are not currently traded, but 5-year CDS are?
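For what it's worth, the quoted rule is just matching spread sensitivities: a CDS position's sensitivity is roughly notional times risky annuity, so one solves N * RA_5 = 100k * RA_10, giving N = 100k * 10 / 5 = 200k. The catch, if I read it right, is that this matches the mark-to-market risk but not the default exposure: should the name default in the first five years, the 200k hedge pays out against only a 100k bond loss.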

by Tom at October 21, 2014 06:41 PM

TheoryOverflow

Simple explanation of the O(n log n) algorithm for matrix chain multiplication

I've seen references to papers that talk of an algorithm that is able to compute the optimal order for multiplying matrices to reduce the number of operations (matrix chain multiplication), but does anyone know of a simple explanation that I can understand intuitively?

http://i.stanford.edu/pub/cstr/reports/cs/tr/81/875/CS-TR-81-875.pdf http://en.wikipedia.org/wiki/Matrix_chain_multiplication

by dhruvbird at October 21, 2014 06:41 PM

StackOverflow

shortcut to define parameterless functions in clojure

I am searching for a shortcut to define parameterless functions in Clojure:

=> (def x (fn [] (println "test")))
#'x
=> (x)
test
nil
=> (def y (println "test"))
test
#'y
=> (y)
NullPointerException   core/eval2015 (form-init5842739937514010350.clj:1)

I would really like to avoid typing the fn []. I know about the lambda notation #() but it requires at least one parameter. I would use them in a GUI binding to handle button click events, where I don't care about the event itself; I just need to know the button was clicked.
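(Unless this changed between versions, the premise about #() is off: #(println "test") is a perfectly good zero-argument function; % and friends are only needed when you want to refer to arguments.)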

by Emmanuel Touzery at October 21, 2014 06:40 PM

/r/emacs

sharing org documents online

Hello,

Recently, I started to use org-mode and as a result, I am writing most of my documents in it. To share my documents I am uploading them to gist.github.com.

Since GitHub can parse org documents, this is a great approach for sharing my files. The problem is that the GitHub org parser is buggy and cannot parse documents correctly.

Is there any other service that makes sharing org files possible?

submitted by yilmazhuseyin
[link] [6 comments]

October 21, 2014 06:37 PM

CompsciOverflow

All sets of interval coverage given a fixed interval size

Given an interval $I_n = [0,n]$ and a positive integer $m$ where $n,m \in \mathbb{N}$ and $m < n$, return all covering sets $S_m$ of $I_n$. A covering set $S_m$ of $I_n$ is a set of sub-intervals, where each sub-interval is of length $m$ and each point in $I_n$ is contained within at least 1 sub-interval.

For example given the interval $ I_6=[0,6]$ and $m=5$, a coverage set of $I_6$ could be $\{[0,4],[2,6]\}$ or $\{[0,4],[4,8]\}$ or even $\{[0,4],[1,5],[4,8]\}$. Note that each $I_n$ is isomorphic to a subset of $\mathbb{N}$, i.e. the interval $I_3 = [0,3]$ contains points $0,1,2$ and $3$.

I would also like to know if the algorithm could be optimised to not allow redundant covers; i.e., using the same example above, the coverage set $\{[0,4],[1,5],[4,8]\}$ is a redundant version of $\{[0,4],[4,8]\}$ since the latter is a subset of the former.

by Tyler Durden at October 21, 2014 06:32 PM

AWS

Next Generation Genomics With AWS

My colleague Matt Wood wrote a great guest post to announce new support for one of our genomics partners.

-- Jeff;


I am happy to announce that AWS will be supporting the work of our partner, Seven Bridges Genomics, who has been selected as one of the National Cancer Institute (NCI) Cancer Genomics Cloud Pilots. The cloud has become the new normal for genomics workloads, and AWS has been actively involved since the earliest days, from being the first cloud vendor to host the 1000 Genomes Project, to newer projects like designing synthetic microbes, and development of novel genomics algorithms that work at population scale. The NCI Cancer Genomics Cloud Pilots are focused on how the cloud has the potential to be a game changer in terms of scientific discovery and innovation in the diagnosis and treatment of cancer.

The NCI Cancer Genomics Cloud Pilots will help address a problem in cancer genomics that is all too familiar to the wider genomics community: data portability. Today's typical research workflow involves downloading large data sets, (such as the previously mentioned 1000 Genomes Project or The Cancer Genome Atlas (TCGA)) to on-premises hardware, and running the analysis locally. Genomic datasets are growing at an exponential rate and becoming more complex as phenotype-genotype discoveries are made, making the current workflow slow and cumbersome for researchers. This data is difficult to maintain locally and share between organizations. As a result, genomic research and collaborations have become limited by the available IT infrastructure at any given institution.

The NCI Cancer Genomics Cloud Pilots will take the natural step to solve this problem, by bringing the computation to where the data is, rather than the other way around. The goal of the NCI Cancer Genomics Cloud Pilots are to create cloud-hosted repositories for cancer genome data that reside alongside the tools, algorithms, and data analysis pipelines needed to make use of the data. These Pilots will provide ways to provision computational resources within the cloud so that researchers can analyze the data in place. By collocating data in the cloud with the necessary interface, algorithms, and self-provisioned resources, these Pilots will remove barriers to entry, allowing researchers to more easily participate in cancer research and accelerating the pace of discovery. This means more life-saving discoveries such as better ways to diagnose stomach cancer, or the identification of novel mutations in lung cancer that allow for new drug targets.

The Pilots will also allow cancer researchers to provision compute clusters that change as their research needs change. They will have the necessary infrastructure to support their research when they need it, rather than make a guess at the resources that they will need in the future every time grant writing season starts. They will also be able to ask many more novel questions of the data, now that they are no longer constrained by a static set of computational resources.

Finally, the NCI Cancer Genomics Pilots will help researchers collaborate. When data sets are publicly shared, it becomes simple to exchange and share all the tools necessary to reproduce and expand upon another lab's work. Other researchers will then be able to leverage that software within the community, or perhaps even in an unrelated field of study, resulting in even more ideas being generated.

Since 2009, Seven Bridges Genomics has developed a platform to allow biomedical researchers to leverage AWS's cloud infrastructure to focus on their science rather than managing computational resources for storage and execution. Additionally, Seven Bridges has developed security measures to ensure compliance with the Health Insurance Portability and Accountability Act (HIPAA) for all data stored in the cloud. For the NCI Cancer Genomics Cloud Pilots, the team will adapt the platform to meet the specific needs of the cancer research community as they develop over the course of the Pilot. If you are interested in following the work being done by Seven Bridges Genomics or giving feedback as their work on the NCI Cancer Genomics Cloud Pilots progresses, you can do so here.

We look forward to the journey ahead with Seven Bridges Genomics. You can learn more about AWS and Genomics here.

-- Matt Wood, General Manager, Data Science

by Jeff Barr (awseditor@amazon.com) at October 21, 2014 06:27 PM

StackOverflow

Retrieve large results ~ 1 billion using Typesafe Slick

I am working on a cron job which needs to query Postgres on a daily basis. The table is huge ~ trillion records. On average I would expect to retrieve about a billion records per execution. I couldn't find any documentation on using cursors or pagination for Slick 2.1.0. An easy approach I can think of is to get the count first and loop through using drop and take. Is there a better, more efficient way to do this?

by Shashi at October 21, 2014 06:26 PM

Collecting data from nested case classes using Generic

Is it possible to provide a generic function which would traverse an arbitrary case class hierarchy and collect information from selected fields? In the following snippet, such fields are encoded as Thing[T].

The snippet works fine for most scenarios. The only problem is when Thing wraps a type class (e.g. List[String]) and such field is nested deeper in the hierarchy; when it is on the top level, it works fine.

import shapeless.HList._
import shapeless._
import shapeless.ops.hlist.LeftFolder

case class Thing[T](t: T) {
  def info: String = ???
}

trait Collector[T] extends (T => Seq[String])

object Collector extends LowPriority {
  implicit def forThing[T]: Collector[Thing[T]] = new Collector[Thing[T]] {
    override def apply(thing: Thing[T]): Seq[String] = thing.info :: Nil
  }
}

trait LowPriority {
  object Fn extends Poly2 {
    implicit def caseField[T](implicit c: Collector[T]) =
      at[Seq[String], T]((acc, t) => acc ++ c(t))
  }

  implicit def forT[T, L <: HList](implicit g: Generic.Aux[T, L],
                                   f: LeftFolder.Aux[L, Seq[String], Fn.type, Seq[String]]): Collector[T] =
    new Collector[T] {
      override def apply(t: T): Seq[String] = g.to(t).foldLeft[Seq[String]](Nil)(Fn)
    }
}

object Test extends App {
  case class L1(a: L2)
  case class L2(b: Thing[List[String]])

  implicitly[Collector[L2]] // works fine
  implicitly[Collector[L1]] // won't compile
}

by Marcel Mojzis at October 21, 2014 06:18 PM

Sublime Text and Clojure: Don't pair single quotes

Is there a way to get a syntax type to define keyboard shortcuts, or to set a keyboard shortcut to depend on the syntax type (perhaps under the "context") setting?

My quoted lists '(1 2 3) get entered like this: '(1 2 3)' because Sublime applies its (helpful, but not in this case) auto-pairing behavior.

Here is the relevant bit of the Default (OSX).sublime-keymap file

// Auto-pair single quotes
{ "keys": ["'"], "command": "insert_snippet", "args": {"contents": "'$0'"}, "context":
    [
        { "key": "setting.auto_match_enabled", "operator": "equal", "operand": true },
        { "key": "selection_empty", "operator": "equal", "operand": true, "match_all": true },
        { "key": "following_text", "operator": "regex_contains", "operand": "^(?:\t| |\\)|]|\\}|>|$)", "match_all": true },
        { "key": "preceding_text", "operator": "not_regex_contains", "operand": "['a-zA-Z0-9_]$", "match_all": true },
        { "key": "eol_selector", "operator": "not_equal", "operand": "string.quoted.single", "match_all": true }
    ]
},
{ "keys": ["'"], "command": "insert_snippet", "args": {"contents": "'${0:$SELECTION}'"}, "context":
    [
        { "key": "setting.auto_match_enabled", "operator": "equal", "operand": true },
        { "key": "selection_empty", "operator": "equal", "operand": false, "match_all": true }
    ]
},
{ "keys": ["'"], "command": "move", "args": {"by": "characters", "forward": true}, "context":
    [
        { "key": "setting.auto_match_enabled", "operator": "equal", "operand": true },
        { "key": "selection_empty", "operator": "equal", "operand": true, "match_all": true },
        { "key": "following_text", "operator": "regex_contains", "operand": "^'", "match_all": true }
    ]
},
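If I remember the key-binding machinery right, contexts can also test the syntax scope via the "selector" key, so one could add a higher-priority binding for ' restricted to Clojure scopes (e.g. an operand like source.clojure) that plainly inserts the character instead of the pairing snippet.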

by Steven Lu at October 21, 2014 06:18 PM

CompsciOverflow

What's the difference between Adaptive Control and Hierarchical Reinforcement Learning?

After watching Travis DeWolf's presentation on scaling neural computation, I'm a bit confused about the difference between Reinforcement Learning (whether hierarchical or not) and Adaptive Control. They both seem to be exploring environments and minimizing error through learning, but they're obviously used for very different applications. Can someone explain to me what's the difference between these two tasks and, based off that difference, what would be a good way to combine them?

by Seanny123 at October 21, 2014 06:13 PM

Complexity of Recursive Function

We have two recursive functions. The first one is

$\left\{ \begin{array}{l l} T(n) = \sqrt{n} T(\sqrt{n}) + n \\ T(1) = 1 \end{array} \right.$

and the second one

$\left\{ \begin{array}{l l} F(n) = 2 F(\sqrt{n}) + \log n \\ F(1) = 1 \end{array} \right.$

I think both of these functions have the same complexity; that is, if $F(n) \in \Theta(g(n))$ then $T(n) \in \Theta(g(n))$, because both of them build the same recursion tree and the only difference between them is the values they produce. Is this reasoning true? And another question: is there any good book for studying recurrences?
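For what it's worth, the standard change of variables suggests the two do not match. Writing $S(n) = T(n)/n$ turns the first recurrence into $S(n) = S(\sqrt{n}) + 1$, a chain of $\Theta(\log\log n)$ additions, so $T(n) \in \Theta(n \log\log n)$. Writing $G(m) = F(2^m)$ turns the second into $G(m) = 2G(m/2) + m$, which the master theorem puts at $\Theta(m \log m)$, so $F(n) \in \Theta(\log n \cdot \log\log n)$. The recursion trees also differ: the first node branches $\sqrt{n}$ ways, the second always two.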

by Karo at October 21, 2014 06:09 PM

StackOverflow

Is functional GUI programming possible?

I've recently caught the FP bug (trying to learn Haskell), and I've been really impressed with what I've seen so far (first-class functions, lazy evaluation, and all the other goodies). I'm no expert yet, but I've already begun to find it easier to reason "functionally" than imperatively for basic algorithms (and I'm having trouble going back where I have to).

The one area where current FP seems to fall flat, however, is GUI programming. The Haskell approach seems to be to just wrap imperative GUI toolkits (such as GTK+ or wxWidgets) and to use "do" blocks to simulate an imperative style. I haven't used F#, but my understanding is that it does something similar using OOP with .NET classes. Obviously, there's a good reason for this--current GUI programming is all about IO and side effects, so purely functional programming isn't possible with most current frameworks.

My question is, is it possible to have a functional approach to GUI programming? I'm having trouble imagining what this would look like in practice. Does anyone know of any frameworks, experimental or otherwise, that try this sort of thing (or even any frameworks that are designed from the ground up for a functional language)? Or is the solution to just use a hybrid approach, with OOP for the GUI parts and FP for the logic? (I'm just asking out of curiosity--I'd love to think that FP is "the future," but GUI programming seems like a pretty large hole to fill.)

by shosti at October 21, 2014 06:08 PM

/r/compsci

Follow-up to an old question post of mine: what's the farthest back in time you could send a modern computerised device (like a Nintendo 3DS) and have the scientists and/or engineers you gave it to understand the principles that underlie its function?

October 21, 2014 06:07 PM

CompsciOverflow

What's the difference between adaptive control and a kalman filter?

From my basic understanding of Adaptive Control, I understand that it uses the error and the velocity of the error to approximate the error in the solution space of a problem, thus allowing for guaranteed convergence under certain conditions and rapid adaptation to changing conditions. I've accumulated this knowledge basically from anecdotal evidence and from this video.

From my basic understanding of a Kalman Filter, it takes into account the error from past measurements to estimate the current state with greater accuracy.

From my flawed perspective, they seem almost identical, but what's the difference between the two? I've heard anecdotally that they are duals of each other, but that the Kalman Filter is only for linear systems? Is this close to the truth?

by Seanny123 at October 21, 2014 06:03 PM

QuantOverflow

Variance of "hedged" term structure portfolio increasing?

I'm attempting to use PCA to hedge a small fixed income portfolio. I start with one particular bond and choose the nearest other bond to hedge the 1st principal component. This decreases the portfolio variance by about 50%.

When I add another bond and hedge the first two principal components, portfolio variance drops by about another 5%.

But when I add a third bond and try to hedge the first 3 PCs, the portfolio variance actually increases slightly compared to the 2-PC portfolio.

Is there some reason why adding more instruments would increase my total portfolio variance? Am I almost certainly doing some calculation wrong?

by user939259 at October 21, 2014 05:59 PM

StackOverflow

How do I generate memoized recursive functions in Clojure?

I'm trying to write a function that returns a memoized recursive function in Clojure, but I'm having trouble making the recursive function see its own memoized bindings. Is this because there is no var created? Also, why can't I use memoize on the local binding created with let?

This slightly unusual Fibonacci sequence maker that starts at a particular number is an example of what I wish I could do:

(defn make-fibo [y]
  (memoize (fn fib [x] (if (< x 2)
             y
             (+ (fib (- x 1))
                (fib (- x 2)))))))

(let [f (make-fibo 1)]
  (f 35)) ;; SLOW, not actually memoized

Using with-local-vars seems like the right approach, but it doesn't work for me either. I guess I can't close over vars?

(defn make-fibo [y]
  (with-local-vars [fib (fn [x] (if (< x 2)
                                  y
                                  (+ (@fib (- x 1))
                                     (@fib (- x 2)))))]
    (memoize fib)))

(let [f (make-fibo 1)]
  (f 35)) ;; Var null/null is unbound!?! 

I could of course manually write a macro that creates a closed-over atom and manage the memoization myself, but I was hoping to do this without such hackery.
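For what it's worth, the diagnosis: the inner name fib is bound to the raw fn itself, so the recursive calls bypass the memoized wrapper that memoize returns, and only the outermost call checks the cache. Any fix has to route the recursive call through a reference that is updated to point at the memoized version, which is why the closed-over atom (or a var) keeps coming up.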

by ivar at October 21, 2014 05:57 PM

Planet Clojure

Why macros?

Yesterday a couple people asked me, “How and why do you use macros in a Lisp like Racket or Clojure?”.

I gave answers like:

  • The compiler can do a search-and-replace on your code.

  • You can make DSLs.

  • They’re an “API for the compiler”.

Although all true, I wasn’t sure I was getting the full idea across.

Worse, one time Peter Seibel was within earshot. Although I don’t know if he heard my explanation, I imagined him biting his tongue and politely remembering the “well, actually” rule. :)

Later I remembered Matthias Felleisen boiling down macros into three main categories:

  1. Binding forms. You can make your own syntax for binding values to identifiers, including function definition forms. You may hear people say, in a Lisp you don’t have to wait for the language designers to add a feature (like lambda for Java?). Using macros you can add it yourself. Binding forms is one example.

  2. Changing order of evaluation. Something like or or if can’t really be a function, because you want it to “short-circuit” — if the first test evaluates to true, don’t evaluate the other test at all.

  3. Abstractions like domain specific languages (DSLs). You want to provide a special language, which is simpler and/or more task-specific than the full/raw Lisp you’re using. This DSL might be for users of your software, and/or it might be something that you use to help implement parts of your own program.

Every macro is doing one of those three things. Only macros can really do the first two, at all1. Macros let you do the last one more elegantly.

I think the preceding is a better answer. However, maybe it’s still not the best way to get people from zero to sixty on, “Why macros?”.2

Maybe the ideal is a “teachable moment” — facing a problem that macrology would solve.3 That’s also good because you really really really don’t want to use a macro when a normal function would suffice. So the goal isn’t to get people so enthusiastic about macros that they go forth in search of nails to which to apply that new hammer. Macros often aren’t the right approach. But once in a while, they are the bestest approach ever.

  1. A language like Haskell can choose lazy evaluation, and implement if as a function. I’m saying that only a macro can futz with whatever the default evaluation order is, be it eager or lazy. 

  2. Although I wrote a guide called Fear of Macros, it’s (a) specific to Racket macros and (b) much more about the “how” than the “why”. 

  3. Certainly that’s my own optimal learning situation, as opposed to getting answers or solutions before I have the questions or problems. 

by Greg Hendershott at October 21, 2014 05:56 PM

Hands-on with Clojure day 5

So I’ve fallen behind on the blogging, for a few reasons. Time to catch up.

I’m calling this “day 5” as a useful fiction. It’s a distillation of what is closer to days 5–7, or something like that.

As I mentioned before, this series of blog posts is going more directly from brain to web. Reflection and editing? Not so much.

Clojure port of wffi

I finished what I think is a reasonable initial port of wffi from Racket to Clojure. Pushed at clojure-wffi.

The simplest possible example is, given a horseebooks.md file like this:

# horseebooksipsum.com

Endpoint: http://horseebooksipsum.com

# Get

## Request
````
GET /api/v1/{paragraphs}
````

You can write:

(defwrappers "horseebooks.md")
(pprint (get {:paragraphs 2}))

Which prints:

{:orig-content-encoding "gzip",
 :trace-redirects ["http://horseebooksipsum.com/api/v1/2"],
 :request-time 190,
 :status 200,
 :headers
 {"Content-Type" "text/plain",
  "Transfer-Encoding" "chunked",
  "Connection" "close",
  "Vary" "Accept-Encoding",
  "Cache-Control" "no-cache",
  "Server" "Apache/2.2.22 (Debian)",
  "Date" "Tue, 21 Oct 2014 18:07:16 GMT"},
 :body
 "Principle to work to make more money while having more fun. Unlucky people.
 And practical explanations. Process from preparation, through to delivery.
 And practical explanations. Process from preparation, through to delivery.
 And practical explanations. And practical explanations. Process from
 preparation, through to delivery. And practical explanations. And practical
 explanations. And practical explanations.\n\nDon't stubbornly. This is a
 very special technique that I have never seen. Don't stubbornly. This is
 a very special technique that I have never seen. And practical explanations.
 Don't stubbornly. Principle to work to make more money while having more fun.
 Unlucky people. Process from preparation, through to delivery. Don't
 stubbornly. Process from preparation, through to delivery. And practical
 explanations. This is a very special technique that I have never seen.
 And practical explanations. And practical explanations.\n\n"}

Of course this simple example doesn’t show much value-add. But real-world web services often have numerous parameters allocated among URL path segments, query parameters, and headers. With wffi, useful keyword wrapper functions are automatically generated from a markdown file that both documents and specifies the web service.

If I weren’t at Hacker School, I would spend much more time polishing and refining this. However this project is really just a means to the end of learning Clojure. So I’m going to force myself to task-switch to something else, next. I’ll return to this project if/when it seems like the best vehicle to learn more.

split-with and lazy seqs

Previously I posted that split-with seems to have an inefficient implementation.

Needing something like Racket’s splitf-at, I wrote a quick and dirty version in Clojure:

(defn split
  "FIXME: This is the conceptual, inefficient implementation. Should
  reimplement like Racket's splitf-at."
  [pred coll]
  [(take-while pred coll)
   (drop-while pred coll)])

This isn’t great because it traverses the first portion of the collection twice.

Someone pointed out that Clojure already provides this. It’s called split-with. Nice. But when I M-., I see that its definition is my conceptual one, not the efficient one.

Racket defines splitf-at like so:

(define (splitf-at list pred)
  (unless (procedure? pred)
    (raise-argument-error 'splitf-at "procedure?" 1 list pred))
  (let loop ([list list] [pfx '()])
    (if (and (pair? list) (pred (car list)))
      (loop (cdr list) (cons (car list) pfx))
      (values (reverse pfx) list))))

I “ported” this to Clojure like so:

(defn efficient-split-with
  [pred coll]
  (loop [ps [], coll coll]
    (if (and (seq coll) (pred (first coll)))
      (recur (conj ps (first coll)) (rest coll))
      [ps coll])))

One neat thing is the use of conj with a vector means we don’t have to do the reverse like we do in Racket, which should be even more efficient.

So why does Clojure implement split-with the way it does? David Nolen pointed out that I was forgetting about laziness. Aha.

In connection with this I learned about “chunked sequences” in Clojure from a Fogus blog post. Chunked sequences were added as an optimization in v1.1. The force granularity was increased from 1 item to 32.

Someone else pointed out that, had transducers been a thing, maybe lazy seqs wouldn’t be needed. (At least not as a default policy. You could have something like Racket’s racket/stream, with laziness and memoization.)

I already understood, in theory, that side effects expose the difference between eager and lazy evaluation. I learned, hands on, that this includes side effects like ad hoc debugging printlns.1 For example if you have:

(let [coll (map (fn [x]
                  ;; 0: some bug that will throw an exception
                  )
                coll)
      _ (println "Everything AOK thus far -- or maybe not!")]
  ;; 1: use coll in way that forces the lazy seq
  )

The error won’t occur at 0, it will only occur at 1. The progress println will be misleading. At least it misled me, for awhile, about the actual location of a bug.

With Clojure and Haskell, I’ll need to keep in mind when and where laziness is used as the default policy. If I understand correctly, in Clojure that means lazy sequences, and in Haskell lazy evaluation generally.

Load vs. modules

After about a week hands-on with Clojure one of the things I miss the most from Racket is modules. Not even Racket’s submodules. Just plain modules.

Clojure namespaces handle name collisions. But modules go further:

  1. Forward references are OK.

  2. Redefinitions are flagged as errors.

  3. Deleted definitions actually disappear from the environment on re-eval of the source file.

In other words, Clojure seems to be like Racket’s #lang racket/load, which isn’t recommended for general use.

An example scenario: I rename a function from foo to bar. I overlook updating a call site. Clojure doesn’t warn me with an error. Worse, I change bar’s behavior. But old foo still exists — and is being used at the overlooked call site. Hilarity and gnashing of teeth ensues.

This isn’t a hypothetical example. It’s happened to me a couple times in not that many days of hands-on with Clojure. On the one hand, this seems like an insane programming workflow. On the other hand, I have already learned to “measure twice, cut once” when renaming — and to bite the bullet and invest 10 seconds in a cider-restart. So if life had to be this way, I could cope.

But why does it have to be this way? I actually started to draft a module macro for Clojure. As I put in its README:

DISCLAIMER: This is a first draft by someone who…

  • has been hands-on with Clojure for just a week

  • doesn’t necessarily appreciate how Clojure namespaces work

  • doesn’t know the complete story behind Racket modules

I can imagine Inigo Montoya telling me, “I do not think module means what you seem to think it means”. Yeah. That’s probably me, with this code. At the moment it’s an exercise in starting to think about what might be involved.

Conclusions and next steps

There is a lot about Clojure that I really, really like and enjoy. At times I do wish it had better tooling and were built on a more-rigorous foundation.

I need to determine what to do next — spend more time with Clojure, or move on to Haskell. Also I need and want to spend significantly more time pairing with people here — which will at least partially entail working with a variety of other languages and platforms.

So it’s likely that I’ll take at least a brief break from Clojure. But I’ll return at some point.

  1. Are printlns a sophisticated debugging technique? Nope. But some experienced programmers use them as a quick first resort (even when they’re willing and able to fire up a real debugger). 

by Greg Hendershott at October 21, 2014 05:54 PM

StackOverflow

zmq vs redis for pub-sub pattern

redis supports pub-sub
zmq also supports pub-sub via a message broker

What would be the architectural pros/cons for choosing between them?
I'm aiming at points which are beyond the obvious use-case specific performance benchmarking that should be done (here's a nice example).

Assume use of a high-level language such as Python.

by Jonathan at October 21, 2014 05:27 PM

ZMQ patterns; send then receive

I have been reading up on zmq design patterns, but I haven't been able to find one that fits my need.

1. Box A sends info (json) to Box B and C; B and C gets different info from each other  
2. Boxes B and C do some work based on info received from Box A  
3. After finishing the work, Boxes B and C sends result back to Box A    

The forwarder device (http://learning-0mq-with-pyzmq.readthedocs.org/en/latest/pyzmq/devices/forwarder.html) can achieve steps 1 and 2 but not 3, correct?

Are there any patterns I can use to achieve this?
Is it a simple request/reply pattern?
If so, is there a centralized request/reply pattern so that Box A doesn't pick Boxes B and C, but rather sends info to something central that knows to send it to Boxes B and C and return the result back to Box A?

by ealeon at October 21, 2014 05:04 PM

What is the 'parallel' concept in Rich Hickey's transducers Strange Loop talk?

In the Strange Loop presentation on transducers, Rich Hickey mentions a concept in a table called 'parallel'.

You can easily see examples of seqs and into and channels using transducers.

Now you can work out that Observables are talking about RxJava.

My question is: what is the 'parallel' concept in Rich Hickey's transducers Strange Loop talk? Is this a list of futures, or pmap, or something else?

by hawkeye at October 21, 2014 05:00 PM

ansible: sort of list comprehension?

Given this inventory:

[webservers]
10.0.0.51   private_ip='X.X.X.X'
10.0.0.52   private_ip='Y.Y.Y.Y'
10.0.0.53   private_ip='Z.Z.Z.Z'

How can I get a list of the private ips of the webservers?

webservers_private_ips: "{{  }}"  # ['X.X.X.X', 'Y.Y.Y.Y', 'Z.Z.Z.Z']

I know groups['webservers'] will give me this list ['10.0.0.51', '10.0.0.52', '10.0.0.53'] and I can get the private_ip of one with:

{{ hostvars[item]['private_ip'] }}
with_items: groups['webservers']

But I would like to declare a variable in my var file directly and not have a task to register it. It would be nice if something like the following could be done:

webservers_private_ips: "{{ hostvars[item]['private_ip'] }}  for item in groups['webservers']" 
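If upgrading is an option: later Ansible releases (2.1+, if I remember right) ship an extract filter that makes this a one-liner, along the lines of {{ groups['webservers'] | map('extract', hostvars, 'private_ip') | list }}.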

by YAmikep at October 21, 2014 04:54 PM

Importing a text file into Cassandra using Spark when there are multiple variable types

I'm using Spark to import data from text files into CQL tables (on DataStax). I've done this successfully with one file in which all variables were strings. I first created the table using CQL, then in the Spark shell using Scala ran:

val file = sc.textFile("file:///home/pr.txt").map(line => line.split("\\|").map(_.toString));
file.map(line => (line(0), line(1))).saveToCassandra("ks", "ks_pr", Seq("proc_c", "proc_d"));

The rest of the files I want to import contain multiple variable types. I've set up the tables using CQL and specified the appropriate types there, but how do I transform them when importing the text file in Spark?
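A sketch of one way (column names and types here are hypothetical), mirroring the snippet above: do the per-field conversion inside the map before saving, so each tuple slot already has the CQL column's type:

val file = sc.textFile("file:///home/pr.txt").map(_.split("\\|"))
file.map(line => (line(0), line(1).toInt, line(2).toDouble))   // text, int, double columns
    .saveToCassandra("ks", "ks_pr", Seq("proc_c", "proc_d", "proc_e"))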

by eamcvey at October 21, 2014 04:51 PM

/r/clojure

What would be an easy way to install Cider 0.7.0 ?

Installing Cider from the MELPA repository using Emacs' package-install gives a 0.8.0-SNAPSHOT version. Installing Cider from MELPA-Stable gives version 0.6.0. Does anybody know of an easy way to install Cider 0.7.0?

submitted by daslu
[link] [4 comments]

October 21, 2014 04:48 PM

Integrating ClojureScript into an existing application

Hi everyone,

I'm the lead on a team that is moving to Clojure. We have an existing application that's already pretty large. Because of the way we chose to architect it, by far the largest part of the codebase is on the front end (the middle tier is RESTful and as passthrough as possible).

We're in the process of converting the back end from Groovy over to Clojure. My question here is about the front end.

I admit to being intrigued by ClojureScript, although I really do love JavaScript. Our front end is AngularJS supported by a handful of libraries like underscore and moment.

My question is this. Has anyone integrated ClojureScript into an existing JS codebase? Can you do it non-intrusively? I get that there's nothing theoretically stopping me from doing it, I was just wondering if anyone has any real life stories about how it worked out from a implementation and maintenance perspective.

submitted by jeremy1015
[link] [1 comment]

October 21, 2014 04:47 PM

StackOverflow

Heroku and Leiningen: where did my files go?

I have a Leiningen project that is dependent on another Leiningen project. Both are on Github. I cloned the project I am dependent on to the checkouts folder as a Git submodule, which works great in my development environment. I can use the classes from the dependency without even having to add it as a dependency in project.clj (despite the fact that the documentation says "If you have a project in checkouts without putting it in :dependencies then its source will be visible but its dependencies will not be found").

The main problem is that when I push the project to Heroku, the submodules are cloned automatically but there is no checkouts directory under /app. I guess that Heroku ignores checkouts for some reason.

Presumably I am doing this wrong and there's a right way for me to work in parallel with two Git repos, one of which is dependent on the other. The main issue for me is that I need to be able to deploy my app easily to Heroku. What is the standard way to deal with this situation?

Update: I also noticed that my circle.yml file, which is in the repo, is not in the /app directory. I'm totally confused about what exactly is in the /app directory and where the other stuff disappeared to.
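For what it's worth, Leiningen documents checkouts as a development-time convenience only; they are not part of the build on a deploy. The dependable route is to list the library in :dependencies and make it resolvable from the build environment (a private repository, Clojars, or similar), which would also explain why Heroku's slug shows no checkouts directory.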

by Matthew Gertner at October 21, 2014 04:42 PM

Get JsonSubType array contents in Scalatra project

I'm using JsonSubTypes (com.fasterxml.jackson.annotation.JsonSubTypes) in a Scalatra project and wanted to have a method either in the ProblemFactory class below or the servlet which returns a list of the "name"s in the JsonSubTypes Array.

   @JsonTypeInfo(
      use = JsonTypeInfo.Id.NAME,
      include = JsonTypeInfo.As.PROPERTY,
      property = "type")
    @JsonSubTypes(Array(
      new Type(value = classOf[AdditionProblemFactory], name = "addition"),
      new Type(value = classOf[EvenOddProblemFactory], name = "evenodd")
    ))
    abstract class ProblemFactory {

I've played around with reflection a bit, but can't seem to extract the names:

ru.typeOf[ProblemFactory].typeSymbol.asClass
.annotations
.find(a => a.tree.tpe == ru.typeOf[JsonSubTypes])
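Since JsonSubTypes is a Java annotation with runtime retention, plain Java reflection may be a shorter route than scala-reflect here; a sketch:

import com.fasterxml.jackson.annotation.JsonSubTypes

val names: Seq[String] =
  classOf[ProblemFactory]
    .getAnnotation(classOf[JsonSubTypes])
    .value            // Array[JsonSubTypes.Type]
    .map(_.name)
    .toSeq            // Seq("addition", "evenodd")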

by ajnatural at October 21, 2014 04:38 PM

Fred Wilson

Getting Feedback and Listening To It

When you are a VC, you live in this protected environment. You sit in your office in a glass conference room with lovely views and entrepreneurs walk in and pitch you and you get to decide who you are going to back and who you are not. People tell you what they think you want to hear. That you are so smart. That you are so successful. They suck up to you. And it goes to your head. You believe it. I am so smart. I am so successful.

You have to get out of that mindset because it is toxic. My number one secret is the Gotham Gal who brings me down to earth every night, makes me do the dishes, walk the dog, and lose to her in backgammon. Actually I have not lost to her in backgammon in over twenty years because she used to beat me so badly that I couldn’t take it anymore.

But blogging is another helpful tool in reminding yourself that you are not all that. Marc Andreessen said as much in his excellent NY Magazine interview which was published yesterday. I loved the whole interview but I particularly loved this bit:

So how do you, Marc Andreessen, make sure that you are hearing honest feedback?

Every morning, I wake up and several dozen people have explained to me in detail how I’m an idiot on Twitter, which is actually fairly helpful.

Do they ever convince you?

They definitely keep me on my toes, and we’ll see if they’re able to convince me. I mean, part of it is, I love arguing.

No, really?

The big thing about Twitter for me is it’s just more people to argue with.

Keeping someone on his or her toes, making them rethink their beliefs, making them argue them, is as Marc says “fairly helpful.” That’s an understatement. It is very very helpful.

That’s the thing I love about the comments here at AVC. I appreciate the folks who call bullshit on me. There are many but Brandon, Andy, and Larry are common naysayers. They may come across as argumentative, but arguing is, as Marc points out, useful.

The comments are also a place where people play the suck up game. It isn’t necessary to do that and I don’t appreciate it. It makes me uneasy.

So I would like to thank the entire AVC community for being a sounding board for my ideas, for pushing back when I am off base, and for resisting the suck up whenever the urge presents itself. I appreciate it very much.

by Fred Wilson at October 21, 2014 04:32 PM

StackOverflow

Why is it not possible to add multiple new objects to a list in Scala?

I need to add multiple custom objects to a Scala List. I am getting an error in a Scala worksheet with:

var l: List[(Char, Int)]= List(new ('A', 2), new ('B', 1) )

How to fix it?

by RCola at October 21, 2014 04:29 PM

CompsciOverflow

Fixed size set to contain the maximum number of given sets

I asked this question in SO here

I have about 1000 sets of size <=5 containing numbers 1 to 100.

{1}, {4}, {1,3}, {3,5,6}, {4,5,6,7}, {5,25,42,67,100} ... 

Is it possible to find a set of size 20 that contains the maximum number of given sets?

In other words, let $U = \{1, 2, ..., 100\}$ and $S \subseteq \{Z \in 2^U \mid |Z| \leq 5\}$. How can I find $X \subseteq U$ with $|X| = 20$ such that $|\{Y \in S \mid Y \subseteq X\}|$ is maximized?

Checking each of the $100!/(80!\,20!)$ candidate sets, or building all set combinations of size ≤ 20, is inefficient.

Is there any (semi-)efficient solution, or is the problem NP-complete?

For example, the set $\{1,2,\ldots,20\}$ contains 5 of the given sets.

by albert at October 21, 2014 04:11 PM

Proving NP-hardness of strange graph partition problem

I am trying to show the following problem is NP-hard.

Inputs: an integer $e$ and a connected, undirected, vertex-weighted graph $G=(V,E)$

Output: a partition of $G$, $G_p=(V,E_p)$, obtained by removing any $e$ edges from $E$, which maximizes

$$\max \sum\limits_{G_i \in \{G_1,G_2,...,G_k\}} \frac1{|G_i|}\left(\sum_{v_j \in V_i}w(v_j)\right)^{\!2},$$

where $G_p=G_1 \cup G_2 \cup \dots \cup G_k$ and the subgraphs $G_i$ are disjoint.
$V_i$ is the vertex set of $G_i$ and $w(v_j)$ is the weight of vertex $v_j$.

Plain English explanation: We wish to partition a graph by removing $e$ edges to maximize an objective. For each of the resulting disjoint subgraphs, the objective sums the vertex weights of the subgraph, squares that value, and divides by the subgraph's cardinality. Finally we sum this over all subgraphs.

So far I have attempted to reduce from NP-hard problems such as ratio-cut, partition (the non-graph problem), and max multicut. I've also attempted to show special cases of the problem are NP-hard (less ideal). The reason I suspect this problem is NP-hard (besides most graph partitioning problems being NP-hard) is the presence of the cardinality term and the cross terms between partition weights. Any input/problem suggestions would be helpful. An NP-hardness proof for any kind of specific graph would be useful.

by Optimizer at October 21, 2014 04:06 PM

Fefe

There it is, the upswing! Germany has swung itself up so ...

There it is, the upswing! Germany has swung itself up so far that we are casually gifting Israel 300 million in tax money. So that they can buy a few nice frigates from us.

Well, maybe this has nothing to do with the upswing after all, since they don't exactly run smoothly, those frigates from Germany.

October 21, 2014 04:01 PM

TheoryOverflow

Are there problems for which divide-and-conquer / recursion is provably useless?

When we try to construct an algorithm for a new problem, divide-and-conquer (using recursion) is one of the first approaches that we try. But in some cases, this approach seems fruitless as the problem becomes much more complicated as its input grows.

My question is: are there problems for which we can prove that a divide-and-conquer approach cannot help to solve? In the following lines I try to make this more formal.

Let $P(n)$ be a certain problem whose input has size $n$ (e.g. a problem that accepts as input an array of $n$ numbers). Suppose we have a recursive algorithm for solving $P(n)$. The recursive runtime of that algorithm is calculated assuming an oracle which can solve $P(k)$ for every $k<n$ in constant time. For example:

  • The recursive runtime of binary search is $O(1)$, since it uses only a comparison and two recursive calls.
  • The maximum element in an array can be found in recursive time $O(1)$.
  • The recursive runtime of merge sort is $O(n)$, because of the merging step.

The recursive time is usually smaller than the actual runtime, which reflects the fact that the recursive algorithm is simpler than a straightforward non-recursive solution to the same problem.

Now my question is:

Is there a problem which can be solved in time $f(n)$, but provably has no recursive algorithm with recursive runtime asymptotically less than $f(n)$?

Some specific variants of this question are:

  • Is there a problem in $P$ which has no algorithm with recursive runtime $O(1)$? (Maybe sorting?)
  • Is there a problem with an exponential algorithm which has no algorithm with polynomial recursive runtime?

EDIT: contrary to my guess, sorting has an algorithm with recursive runtime $O(1)$. So it is still open whether there is a problem in $P$ which has no algorithm with recursive runtime $O(1)$.

by Erel Segal Halevi at October 21, 2014 03:59 PM

TheoryOverflow

Syntactic Complexity Class ${\bf X}$ such that ${\bf PP} \subseteq {\bf X} \subseteq {\bf PSPACE}$

It is known that some (non-relativized) syntactic complexity classes between ${\bf P}$ and ${\bf PSPACE}$ satisfy the following chain of inclusions: ${\bf P} \subseteq {\bf CoNP} \subseteq {\bf US} \subseteq {\bf C_=P} \subseteq {\bf PP} \subseteq {\bf PSPACE}$. I am wondering whether there exists a (non-relativized) syntactic complexity class ${\bf X}$ such that ${\bf PP} \subseteq {\bf X} \subseteq {\bf PSPACE}$. What are the implications of the existence or non-existence of such a complexity class ${\bf X}$?

by Tayfun Pay at October 21, 2014 03:49 PM

StackOverflow

Scala difference between object and class

I'm just going over some Scala tutorials on the Internet and have noticed in some examples an object is declared at the start of the example.

What is the difference between class and object as far as Scala is concerned?

by steve at October 21, 2014 03:49 PM

/r/compsci

Discrete math help? (Primes)

I'm taking first year discrete math and I've got a question I can't solve; can you guys please help?

Show that for all $n \in \mathbb{Z}^+$: if $n = k \cdot l$ with $1 < k < l < n$, then $k < \sqrt{n}$.

($\mathbb{Z}^+$ = the positive integers)

Any tips or suggestions will go a long way.

I've already been able to prove it by contradiction/negation, but we're in the primes unit, so I'm pretty sure it isn't supposed to be that easy.

Thanks!
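
For reference, the heart of a direct argument fits on one line (my note, not from the original thread):

$$1 < k < l \;\Longrightarrow\; k^2 < k \cdot l = n \;\Longrightarrow\; k < \sqrt{n}.$$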

submitted by serg06

October 21, 2014 03:48 PM

CompsciOverflow

Determining Number of States in a Turing Machine

I am looking at an example Turing machine in my textbook, Automata and Computability by Dexter C. Kozen, and I'm confused as to how they determine the number of states this particular machine has. Here is the example:

"Here is a TM that accepts the non-context free set $\{a^nb^nc^n \mid > n\geq 0\}$. Informally, the machine starts in its start state s, then scans to the right over the input string, checking that it is of the form $a^* b^* c^*$. It doesn't write anything on the way across (formally, it writes the same symbol it reads). When it sees the first blank symbol _, it overwrites it with a right endmarker ]. Now it scans left, erasing the first c it sees, then the first b it sees, then the first a it sees, until it comes to the [. It then scans right, erasing one a, one b, and one c. It continues to sweep left and right over the input, erasing one occurrence of each letter in each pass. If on some pass it sees at least one occurrence of one of the letters and and no occurrences of another, it rejects. Otherwise, it eventually erases all the letters and makes one pass between [ and ] seeing only blanks, at which point it accepts.

Formally, this machine has $Q = \{s, q_1, ... , q_{10}, q_a, q_r\}, Σ > = \{a,b, c\}, Γ = \Sigma ∪ \{[, \_, ]\}$" (page 211, Example 28.1)

Are they simply creating states based on their informal definition? Or is there some methodology they are implementing that determines the number of states? If there is some sort of methodology, is it a general methodology that can be applied to other Turing machines? Any help regarding this would be greatly appreciated.

by tdark at October 21, 2014 03:44 PM

StackOverflow

Setting a third-party plugin setting in sbt AutoPlugin

I have an AutoPlugin which aggregates several third-party plugins and customizes their settings for our company. For most of the plugins, this works just fine by putting them in the projectSettings:

override lazy val projectSettings = Seq( somePluginSetting := "whatever" )

I tried to do this for ScalaStyle as well:

import org.scalastyle.sbt.ScalastylePlugin.scalastyleConfigUrl

override lazy val projectSettings = Seq(
  scalastyleConfigUrl := Some(url("http://git.repo/scalastyle-config.xml"))
)

This setting is never visible in projects using my plugin; instead sbt uses the plugin-provided default value:

> inspect scalastyleConfigUrl
[info] Setting: scala.Option[java.net.URL] = None
[info] Description:
[info]  Scalastyle configuration file as a URL
[info] Provided by:
[info]  {file:/Users/kaeser/Documents/workspace/ci-test-project/}root/*:scalastyleConfigUrl
[info] Defined at:
[info]  (org.scalastyle.sbt.ScalastylePlugin) Plugin.scala:101
[info] Delegates:
[info]  *:scalastyleConfigUrl
[info]  {.}/*:scalastyleConfigUrl
[info]  */*:scalastyleConfigUrl
[info] Related:
[info]  test:scalastyleConfigUrl

When I put the setting into build.sbt directly, it works as expected. What might the issue be?

by Justin Kaeser at October 21, 2014 03:31 PM

CompsciOverflow

Weighted undirected graphs, complex Laplacian, complex eigenvalues & spectral clusering

I am rather puzzled and confused. I have been trying to get a clear understanding of how spectral clustering would work for an undirected weighted graph. I have used the normalized Laplacian, but I always get complex, not strictly positive, eigenvalues. All the resources I am finding build on the result that the Laplacian is a real symmetric positive semi-definite matrix, and hence has real non-negative eigenvalues.

Any guidance is greatly appreciated.

Also, if I take the norm of the normalized Laplacian, would spectral clustering algorithms still be valid, with the same results?

by Judy at October 21, 2014 03:25 PM

Planet Emacsen

Irreal: Emacs 24.4 is Released

Emacs 24.4 is finally with us. You can go to the GNU Emacs site to get a copy. When I downloaded it, the mirrors had not yet been updated so I just went to the primary FTP server to get my copy.

It compiled without problem. You can just follow the INSTALL file instructions, perhaps going to the INSTALL file for your particular platform but the TL;DR for the Mac is

./configure --with-ns
make
sudo make install

Then (for the Mac) you have to drag Emacs.app in the nextstep directory to /Applications. It almost takes less time to do it than it does to describe the process.

When I brought the new Emacs up, I had two problems (at least so far). First, it wasn't loading ace-window because it couldn't find the file, even though it was there. I deleted it from ELPA and then re-added it, and it worked again.

Second, I have Emacs configured to split the frame horizontally so that I have two side-by-side windows when I start. The frame split during initialization but then killed one of the windows, so that I had a single wide window. I solved that by disabling desktop-save-mode:

(desktop-save-mode nil)

It still remembers my open buffers across invocations so it's just like it was before. This is no doubt because of the new session-saving features: I'll have to investigate it more later.

This is my second post written in Emacs 24.4 and as you can see it's working just fine. I doubt any Irreal Emacsers need the reminder but you should definitely upgrade. It's really easy, even if you compile from source.

by jcs at October 21, 2014 03:22 PM

StackOverflow

Why does rJava not work on Ubuntu 14.04 using OpenJDK 7?

Hi, I'm having issues with the rJava package from CRAN.

I have installed

sudo apt-get install openjdk-7-jdk
sudo apt-get install r-cran-rjava

and ran

sudo R CMD javareconf
# Java interpreter : /usr/bin/java
# Java version     : 1.7.0_55
# Java home path   : /usr/lib/jvm/java-7-openjdk-amd64/jre
# Java compiler    : /usr/bin/javac
# Java headers gen.: /usr/bin/javah
# Java archive tool: /usr/bin/jar

I then try to run R and load rJava and get the following error:

R
> library(rJava)
Error : .onLoad failed in loadNamespace() for 'rJava', details:
  call: dyn.load(file, DLLpath = DLLpath, ...)
  error: unable to load shared object '/usr/lib/R/site-library/rJava/libs/rJava.so':
  libjvm.so: cannot open shared object file: No such file or directory
Error: package or namespace load failed for ‘rJava’

I'm on Ubuntu 14.04 64 bit and am using R version 3.1.0 (2014-04-10) -- "Spring Dance"

UPDATE: Actually this is not specific to OpenJDK; I just tried Oracle Java 8 and got the same result. Also, I found this workaround here, which I am reluctant to use since it is indeed a workaround and doesn't really explain why it's necessary. The package system should have handled this, in my opinion. It seems like libjvm.so is the problem, and I have it located here:

/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/amd64/jamvm/libjvm.so
/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/amd64/server/libjvm.so
/usr/lib/jvm/java-7-oracle/jre/lib/amd64/server/libjvm.so

and for some reason rJava fails to find them despite updating with sudo R CMD javareconf.

UPDATE 2: The plot thickens: If I run R as sudo it works.

Thankful for pointers.

by Dr. Mike at October 21, 2014 03:21 PM

CompsciOverflow

What are system clock and CPU clock; and what are their functions?

While reading a book, I came across a paragraph given below:

In order to synchronize all of a computer’s operations, a system clock—a small quartz crystal located on the motherboard—is used. The system clock sends out a signal on a regular basis to all other computer components.

And another paragraph:

Many personal computers today have system clocks that run at 200 MHz, and all devices (such as CPUs) that are synchronized with these system clocks run at either the system clock speed or at a multiple of or a fraction of the system clock speed.

Can anyone kindly tell:

  • What is the function of the system clock? And what is meant by synchronize in the first paragraph?
  • Is there any difference between System Clock and CPU Clock? If yes, then what is the function of CPU clock?

by swdeveloper at October 21, 2014 03:15 PM

Planet Theory

Martin Gardner Centennial

Martin Gardner was born on October 21, 1914, so today is his Centennial (he died on May 22, 2010, at the age of 95). We've mentioned him in the blog before:

  1.  The Life of Martin Gardner
  2.  Contribute to the Gardner Centennial
  3.  Another Post on Martin Gardner
  4. I used the anagram Tim Andrer Gran in both my review of the Lipton-Regan book (see here) and my Applications of Ramsey Theory to History paper (see here)

So what can I add on his centennial?

  1. He was not the first person to write on recreational mathematics, but he was certainly early and did it for a long time.
  2. I suspect he influenced everyone reading this who is over 50. For every y under 50 reading this column, there exists x such that MG influenced x and x influenced y.
  3. The line between ``recreational'' and ``serious'' math is sometimes blurry or hard to see. An obvious case of this was Euler and the Bridges problem leading to graph theory. At one time solving equations was done for competition, which seems recreational. Galois theory is not recreational.
  4. Donald Knuth's book Selected Papers in Discrete Math (reviewed by me here) states ``I've never been able to see the boundary between scientific research and game playing.''
  5. I am reading a book, Martin Gardner in the 21st Century, which collects papers by people who were inspired by him. The papers really do blur the distinction between recreational and serious. Some are rather difficult, but all start out with a fun problem.
  6. Aside from recreational math he did other things: magic, and debunking bad science. (Fads and Fallacies in the Name of Science was excellent.) He was a well-rounded person, which is rare now.
  7. Brian Hayes, Ian Stewart, and others do what he did, but given the times we live in now, it's hard to capture the attention of a large segment of the public. (Analogous to TV: when I was a kid there were only a handful of stations; now there are... too many?)
  8. When I was in high school I went to the library looking for math books I could read (naive?). I found one of his books (a collection of his columns) and began reading it. I learned about casting out nines, and I learned what was to be the first theorem I ever learned a proof of outside of class (given that I was probably 12, it may be the first proof I ever learned). It was that (in today's language) a graph is Eulerian iff every vertex has even degree.

by GASARCH (noreply@blogger.com) at October 21, 2014 03:09 PM

Planet Clojure

Local State, Global Concerns

CircleCI’s recently open-sourced frontend is built in ClojureScript using Om. Combining Clojure’s functional primitives and React’s programming model yields a uniquely powerful approach to user interfaces. Previously complex features, such as efficient undo, become trivially simple to implement. The simple versions turn out to be even more powerful. You don’t just get efficient undo, you also gain the ability to serialize the entire state of your application to inspect, debug, or reload! While the promise of snapshotting app state has … Continue reading

by CircleCI at October 21, 2014 03:04 PM

StackOverflow

More elegant way to handle error and timeouts in core.async?

Of course I want to wrap various requests to external services with core.async, while still returning results from these operations through some chan.

I want to take care of both thrown exceptions and timeouts (i.e. the operation taking longer than expected to return), and to be able to choose among various services for the same task but with different approaches or qualities of service.

The smallest viable example showing how to handle an error, a timeout, and a correctly returned result seems to be this:

(require '[clojure.core.async :refer [chan go timeout <! >! alt!]])

(def logchan (chan 1))

(go (loop []
      (when-let [v (<! logchan)]
        (println v)
        (recur))))

(dotimes [_ 10] 
  (go 
    (let [result-chan  (chan 1)
          error-chan   (chan 1)
          timeout-chan (timeout 100)]
      (go
        (try 
          (do (<! (timeout (rand-int 200)))
              (>! result-chan (/ 1 (rand-int 2))))
          (catch Exception e (>! error-chan :error))))
      (>! logchan (alt! [result-chan error-chan timeout-chan] 
                    ([v] (if v v :timeout)))))))

This code prints something like

1
:error
1
:error
:error
:timeout
:error
:timeout
:timeout

This is not very elegant. I especially don't like the way of returning :error and :timeout. The nil-check in alt! is clearly not what I want either.

Is there some better way to accomplish the three goals of returning a result, protecting against long timeouts, and handling errors? The syntax is quite OK (most things above are really there to provoke those three outcomes).
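
For comparison, one possible shape for such a wrapper (a rough sketch, not an established pattern: with-timeout is a made-up helper reusing the chan/go/timeout/alt! vars already required above, and it assumes the wrapped function returns a non-nil value):

(defn with-timeout [ms f]
  ;; returns a channel yielding f's value, :error if f throws,
  ;; or :timeout if nothing arrives within ms milliseconds
  (let [c (chan 1)]
    (go (>! c (try (f) (catch Exception _ :error))))
    (go (alt! c            ([v] v)
              (timeout ms) :timeout))))

;; usage: (go (>! logchan (<! (with-timeout 100 #(/ 1 (rand-int 2))))))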

by claj at October 21, 2014 02:49 PM

Planet Emacsen

Emacs Redux: Emacs 24.4

Emacs 24.4 is finally out!

You can read about all the new features here. I’ve published a series of articles about some of the more interesting features.

In related news - the Emacs Mac Port based on 24.4 has also been released.

by Bozhidar Batsov at October 21, 2014 02:36 PM

Jeff Darcy

Distributed Systems Prayer

Forgive me, Lord, for I have sinned.

  • I have written distributed systems in languages prone to race conditions and memory leaks.

  • I have failed to use model checking when I should have.

  • I have failed to use static analysis when I should have.

  • I have failed to write tests that simulate failures properly.

  • I have tested on too few nodes or threads to get meaningful results.

  • I have tweaked timeout values to make the tests pass.

  • I have implemented a thread-per-connection model.

  • I have sacrificed consistency to get better benchmark numbers.

  • I have failed to measure 99th percentile latency.

  • I have failed to monitor or profile my code to find out where the real bottlenecks are.

I know I am not alone in doing these things, but I alone can repent and I alone can try to do better. I pray for the guidance of Saint Leslie, Saint Nancy, and Saint Eric. Please, give me the strength to sin no more.

Amen.

October 21, 2014 02:35 PM

Planet Theory

Algorithms that never get coded up

(There was a passing ref to this topic in the comments to one of Scott's blogs, so I thought I would pick up the theme.)

When I teach Formal Language Theory I end up teaching many algorithms that I don't think are ever coded up. Or if they are, I doubt they are used. A common refrain in my class is:

Could you code this up?

Would you want to?

If the answers are YES and NO then they are enlightened.

Here are some examples of algorithms that are commonly taught but are never really used. They are still good to know:

  1. The algorithm that takes a DFA and makes a regular expression out of it. (I use the R(i,j,k) algorithm: R(i,j,k) is the set of all strings that take you from state i to state j using a subset of {1,...,k} as intermediaries. One computes a reg exp for R(i,j,k) via dynamic programming.) The actual reg expressions may get very long, so I do not know if it's poly time. But it's not coded up, since there is never a reason to go from a DFA to a reg exp. (There IS a reason to go from a reg exp to a DFA.) Note that it IS good to know that REG EXP = DFA = NDFA, so this IS worth teaching and knowing, AND it's a nice example of dynamic programming. (A small sketch of this construction appears after the list.)
  2. The algorithm that takes any CFG and puts it in Chomsky Normal Form. The only grammars really used now (for compilers) are of a much more specialized type.
  3. The algorithm that shows that any CFL is in P (O(n^3), though you can do better) using Chomsky Normal Form. Again, nobody uses general grammars. Still good to know that CFLs are in P, and again a nice dynamic programming algorithm.
  4. Converting a PDA to a CFG. I doubt this is ever done. The converse is of course key to compilers. But the converse is done for very specialized grammars, not general ones. Again though, good to know that CFGs and PDAs are equivalent. The PDA-to-CFG conversion is NOT nice.
  5. Converting a Blah-tape, Blah-head Turing machine to a Blah'-tape, Blah'-head Turing machine. Important to know you can do it, not important to know the details, not worth coding up unless you are trying to win that contest where you want to get really small universal TMs.
  6. All of the reductions in NP-completeness. I doubt any are actually done. Since we have very good SAT solvers, there may be algorithms to convert problems TO SAT problems.

If I am wrong on any of these and something on the above list IS worth coding up and HAS been coded up and IS being used, then I will be delighted to hear about it.
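
Since item 1 describes the R(i,j,k) recurrence only in prose, here is a rough Clojure sketch of it (my illustration, not the post's code; the name regex-for and the DFA encoding are made up):

(require '[clojure.string :as str])

;; DFA given as a transition map {[state symbol] state}, states 1..n.
;; "∅" stands for the empty language, "ε" for the empty string; no
;; simplification of the resulting expression is attempted.
(def regex-for
  (memoize
   (fn [delta i j k]
     (if (zero? k)
       ;; base case: symbols taking i directly to j, plus ε when i = j
       (let [syms (cond-> (vec (for [[[p a] q] delta
                                     :when (and (= p i) (= q j))]
                                 (str a)))
                    (= i j) (conj "ε"))]
         (if (seq syms) (str/join "|" syms) "∅"))
       ;; R(i,j,k) = R(i,j,k-1) | R(i,k,k-1) R(k,k,k-1)* R(k,j,k-1)
       (let [skip (regex-for delta i j (dec k))
             in   (regex-for delta i k (dec k))
             self (regex-for delta k k (dec k))
             out  (regex-for delta k j (dec k))]
         (if (or (= in "∅") (= out "∅"))
           skip
           (let [through (str "(" in ")"
                              (when (not= self "∅") (str "(" self ")*"))
                              "(" out ")")]
             (if (= skip "∅") through (str skip "|" through)))))))))

;; e.g. a two-state DFA over {a, b} with 1 --a--> 2 and 2 --b--> 1:
;; (regex-for {[1 \a] 2, [2 \b] 1} 1 2 2)

The memoization is what makes this the dynamic program mentioned in the post; as the post notes, the resulting expressions can blow up in length, and this sketch makes no attempt to control that.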

by GASARCH (noreply@blogger.com) at October 21, 2014 02:34 PM

StackOverflow

Clojure newbie: how to call methods of a Java object

I'm trying the following

  (def myMap (HashMap.))
  (doto (myMap) (.put "a" 1) (.put "b" 2))

I get as a result:

Reflection warning, core.clj:20:3 - call to method put can't be resolved (target class is unknown).
Reflection warning, core.clj:20:3 - call to method put can't be resolved (target class is unknown).

Am I doing anything wrong?
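
For reference, a version without the reflection warnings might look like this (my sketch, not the poster's code; note that the extra parentheses in (doto (myMap) ...) also try to call the map as a function, which is presumably not intended):

;; the constructor's type is known to the compiler, so .put resolves
;; without reflection and no type hint is needed
(def my-map (doto (java.util.HashMap.) (.put "a" 1) (.put "b" 2)))

;; alternatively, type-hint the var so later interop calls resolve too
(def ^java.util.HashMap my-map-2 (java.util.HashMap.))
(doto my-map-2 (.put "a" 1) (.put "b" 2))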

by Jas at October 21, 2014 02:15 PM

StackOverflow

Clojure - difference between quote and syntax quote

(def x 1)
user=> '`~x
x
user=> `'~x
(quote 1)

Can anyone please explain step by step how these are evaluated?
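
For reference, a sketch of what the reader does with each form (my annotation, not part of the original question):

;; '`~x reads as (quote `~x); inside a syntax-quote, ~x undoes the
;; quoting, so `~x reads as plain x, and quoting that yields the symbol x
user=> '`~x
x

;; `'~x reads as `(quote ~x); ~x is evaluated (to 1) when the
;; syntax-quote builds its result, giving the two-element list (quote 1)
user=> `'~x
(quote 1)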

by user1998149 at October 21, 2014 02:05 PM

TheoryOverflow

What if an $\mathsf L$-complete problem has $\mathsf{NC}^1$ circuits?

In other words, is there a result comparable to the Karp-Lipton theorem starting from the assumption $L\in\mathsf{NC}^1/\mathsf{poly}$ with $L$ an $\mathsf L$-complete language (under, say, $\mathsf{AC}^0$ reductions)? By "comparable" I mean that it derives non-trivial consequences (perhaps considered unlikely) from the assumption.

(Given the lack of answers to my previous question on the relationship between $\mathsf{NC}^1$ and $\mathsf L$, I suppose that no similar result is known. However, I thought I would ask anyway, in the hope that, formulated in this more specific way, the question would attract more readers).

by Damiano Mazza at October 21, 2014 02:02 PM

StackOverflow

Organizing multiple scala interrelated sbt & git projects - best practice suggestions

With Scala, using sbt for builds and git for version control, what would be a good way of organizing your team's code when it outgrows being a single project? At some point, you start thinking about separating your code into separate libraries or projects, and importing between them as necessary. How would you organize things for that? Or would you avoid the temptation and just manage all packages under the same single sbt and git "project"?

Points of interest being: (feel free to change)

  • Avoiding inventing new "headaches" that over-engineer imaginary needs.
  • Still being able to easily build everything when you still want to, on a given dev machine or a CI server.
  • Packaging for production: being able to use SbtNativePackager to package your stuff for production without too much pain.
  • Easily control which version of each library you use on a given dev machine, and being able to switch between them seamlessly.
  • Avoiding git manipulation becoming worse than it typically is.

In addition, would you use some sort of "local sbt/maven team repository", and what would need to be done to accomplish that? Hopefully this is not necessary, though.

Thanks!

by matt at October 21, 2014 01:59 PM

Define recursive references - lazy val causes stack overflow

For a data-flow scenario I need values that recursively reference each other. The following doesn't work:

class Foo(val that: Foo)

class Bar {
  lazy val a: Foo = new Foo(b)
  lazy val b: Foo = new Foo(a)
  println(s"a = $a, b = $b")
}

new Bar  // boom!

How would I solve this without getting my hands dirty with a var?

by 0__ at October 21, 2014 01:58 PM

CompsciOverflow

Three phase commit : study case

Consider a group of five processors implementing three-phase commit protocol. During the execution of the protocol, the coordinator and one other process crash. Two of the remaining processes are waiting in "READY" state for the coordinator while the third process is in "PREPARE COMMIT" state. Can they continue and complete the protocol without waiting for recovery of the crashed processes? Do they COMMIT or ABORT? Which state can the crashed process be in?

In my opinion, the crashed processor might have been in the "ABORT" state, so how can the other processors know the "global decision"?

by Fabrizio at October 21, 2014 01:43 PM

StackOverflow

Compilation failed: error while loading AnnotatedElement, ConcurrentMap, CharSequence from Java 8 under Scala 2.10?

I'm using the following:

  • Scala 2.10.4
  • Scalatra 2.2.2
  • sbt 0.13.0
  • java 1.8.0
  • casbah 2.7.2
  • scalatra-sbt 0.3.5

I'm frequently running into this error:

21:32:00.836 [qtp1687101938-55] ERROR o.fusesource.scalate.TemplateEngine - Compilation failed:
error: error while loading CharSequence, class file '/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/lib/rt.jar(java/lang/CharSequence.class)' is broken
(class java.lang.RuntimeException/bad constant pool tag 18 at byte 10)
error: error while loading ConcurrentMap, class file '/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/lib/rt.jar(java/util/concurrent/ConcurrentMap.class)' is broken
(class java.lang.RuntimeException/bad constant pool tag 18 at byte 61)
two errors found
21:38:03.616 [qtp1687101938-56] ERROR o.fusesource.scalate.TemplateEngine - Compilation failed:
error: error while loading AnnotatedElement, class file '/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/lib/rt.jar(java/lang/reflect/AnnotatedElement.class)' is broken
(class java.lang.RuntimeException/bad constant pool tag 18 at byte 76)
one error found

Currently I'm running into this when simply trying to call a .count() on my MongoDB collection.

Upon Googling, it seems like it may be caused by dependency issues. The thing is, I'm using Scalatra just to serve an API and actually don't require any of the scalate stuff. I commented out all references to it, but I still get this. Could it be a dependency issue between the libraries I'm using?

Any help appreciated. Thanks!

by jnfr at October 21, 2014 01:42 PM

Scala Collection Specific Implementation

Say I have some data in a Seq in Scala 2.10.2, e.g:

scala> val data = Seq( 1, 2, 3, 4, 5 )
data: Seq[Int] = List(1, 2, 3, 4, 5)

Now, I perform some operations and convert it to a Map

scala> val pairs = data.map( i => i -> i * 2 )
pairs: Seq[(Int, Int)] = List((1,2), (2,4), (3,6), (4,8), (5,10))

scala> val pairMap = pairs.toMap
pairMap: scala.collection.immutable.Map[Int,Int] = Map(5 -> 10, 1 -> 2, 2 -> 4, 3 -> 6, 4 -> 8)

Now say, for performance reasons, I'd like pairMap to use the HashMap implementation of Map. What's the best way to achieve this?

Ways I've considered:

  1. Casting:

    pairMap.asInstanceOf[scala.collection.immutable.HashMap[Int,Int]]
    

    This seems a bit horrible.

  2. Manually converting:

    var hm = scala.collection.immutable.HashMap[Int,Int]()
    pairMap.foreach( p => hm += p )
    

    But this isn't very functional.

  3. Using the builder

    scala.collection.immutable.HashMap[Int,Int]( pairMap.toSeq:_* )
    

    This works, but it's not the most readable piece of code.

Is there a better way that I'm missing? If not, which of these is the best approach?

by paulmdavies at October 21, 2014 01:39 PM

How can I say "all these tasks should have this tag" when including yml files?

In my roles/common/main.yml I have - include: ruby2.yml. That file has 3-4 tasks and each one has tags: ruby2. Works fine, but feels repetitive. The documentation says that when doing the include I could write it like this: - include: ruby2.yml tags=ruby2

But that puts the responsibility outside of the file itself which bugs me for some reason.

Is there a way, within ruby2.yml to say "all of these tasks should have the 'ruby2' tag?"

by Philip Hallstrom at October 21, 2014 01:32 PM

CompsciOverflow

What if two of the pre-order, in-order and post-order traversals are the same?

Suppose a tree T has identical:

  1. Pre-order traversal and in-order traversal
  2. Pre-order traversal and post-order traversal
  3. In-order traversal and post-order traversal

What does T look like?

From a previous discussion on SO, I learned that pre-order and post-order are the same iff there is only one node in T.

What about the other two conditions?

I suppose that only a purely left-skewed tree makes post-order and in-order the same, and only a right-skewed tree makes pre-order and in-order the same.

Am I on the right track?


Edit: To avoid this being solely a yes-or-no question: the actual doubt is how to prove the statement, though I have an idea for proofs by contradiction. So maybe this problem is a little redundant.

by Zhen Zhang at October 21, 2014 01:31 PM

StackOverflow

clojure partition list into equal piles

I'm looking for a way to take a list of N elements and split it into M piles of equal size, where any remainders are added one at a time to each pile. I feel like there might be something already out there.

List:  [1 2 3 4 5 6 7 8 9]
M = 5

[[1] [2] [3] [4] [5]]; divided into equal piles with remainder [6 7 8 9]

[[1 6] [2 7] [3 8] [4 9] [5]]; output

I don't really care about the actual numbers in each pile, as long as the count of every pile is within one of all the others.

I found partition-all, but it doesn't deal with the remainder the way I need, and I couldn't get the program to take the last elements of the generated list and stick them into the previous piles.
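
For what it's worth, dealing the elements round-robin gives piles whose sizes differ by at most one (a sketch along those lines; into-piles is a made-up name):

;; pile i gets every m-th element starting at offset i
(defn into-piles [m xs]
  (map #(take-nth m (drop % xs)) (range m)))

(into-piles 5 [1 2 3 4 5 6 7 8 9])
;; => ((1 6) (2 7) (3 8) (4 9) (5))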

by user1639926 at October 21, 2014 01:29 PM

Action(parser.json) vs Action.async error, and concurrent.Execution.Implicits causing "could not initialize class controllers" in Play Scala

I'm trying to create a POST request to insert data into MongoDB, using:

  1. sbt 0.13.6
  2. play 2.10
  3. scala 2.11.2
  4. play2-reactivemongo 0.10.2
  5. mongodb 2.6.4

The data is posted as JSON; I created a case class for the model and use JsPath to convert the JSON to the entity class.

This is my sample code:

def inserTransaction = Action(parser.json) { implicit request =>

   val json = request.body
   val data = json.as[Transaction]
   Logger.info(data.toString)
   val future = collection.insert(data.copy(id = Option[BSONObjectID](BSONObjectID.generate)))
   var result = ""

   future.onComplete {
     case Failure(t) => result = "An error has occured: " + t.getMessage
     case Success(post) => result = "success"
   }
   Ok(result)
}

I've seen some sample code that used Action.sync for handling asynchronous results in controllers, but when I try to use Action.sync, my IntelliJ IDE reports "cannot resolve Action.sync as signature". I've tried to change the result of the function like this:

future.onComplete {
    case Failure(t) => Ok("An error has occured: " + t.getMessage)
    case Success(post) => Ok("success")
  }

So I decided to use Action(parser.json), but then Activator/Play tells me that I should use "import play.api.libs.concurrent.Execution.Implicits._" in my code. But when I add that import, a new error appears:

 ! Internal server error, for (POST) [/insertdata] ->

java.lang.ExceptionInInitializerError: null ....

Caused by: play.api.PlayException: ReactiveMongoPlugin Error[The ReactiveMongoPlugin has not been         
initialized! Please edit your conf/play.plugins file and add the following line....

When I retried the request, it showed another error:

! Internal server error, for (POST) [/api/insertdata] ->

java.lang.NoClassDefFoundError: Could not initialize class controllers.TransactionController$

[error] application - Error while rendering default error page
scala.MatchError: java.lang.NoClassDefFoundError: Could not initialize class 
controllers.TransactionController$ (of class java.lang.NoClassDefFoundError)

Does anyone have a solution for my problem?

by situkangsayur at October 21, 2014 01:23 PM

Why "val a=-1" doesn't work in scala?

I found that val a = -1 works well in the Scala REPL, but if I skip the spaces around the =, as in val a=-1, the expression doesn't return the result.

Does anyone have ideas about this? Why is the space around the = necessary here?

by Firegun at October 21, 2014 01:09 PM

Scala - why fail to override superclass's method

  class A
  class B extends A

  class D { def get: A = ??? }
  class E extends D { override def get: B = ??? } // OK

  class F { def set(b: B): Unit = ??? }
  class G extends F { override def set(a: A): Unit = ??? } // Compile Error, override nothing

My question is: why doesn't G work, given that (A => Unit) is a subtype of (B => Unit)?

implicitly[(A => Unit) <:< (B => Unit)]

by chenhry at October 21, 2014 01:04 PM

Private Communication between client and server using websockets play 2.3.x

I am a newbie to Scala and can't figure out how to send a private message to a client using WebSockets.

Here is my controller:

object Client extends Controller {
  def socket(uuid: String) = WebSocket.acceptWithActor[String, String] { request =>
    out => ClientWebSocket.props(uuid)
  }
 // Test Method to send message to websocket connected client
  def sendMessage(guid: String) = Action { implicit request =>
    val system = ActorSystem("default")
    val out = system.actorOf(Props(classOf[ClientWebSocket], guid))
    out ! SendUpdate("Message Recieved")
    Ok
  }
}

Here is my actor class:

object ClientWebSocket {
  def props(uuid: String) = Props(new ClientWebSocket(uuid))
  case class SendUpdate(msg:String)

}

class ClientWebSocket(uuid: String) extends Actor {
  import ClientWebSocket._

  def receive = {
    case SendUpdate(msg:String) =>
      sender ! "Message is " + msg
  }
}

When I call sendMessage with the uuid of a client, I get an Akka "dead letters encountered" error. Any help is really appreciated.

by dj123 at October 21, 2014 12:57 PM

How can a process inquire when it was started?

Is there a call, that can be used to ask the OS, when the current process started?

Of course, one could simply call gettimeofday() at start-up and refer to that once-recorded value through the life of the process, but is there another option?

Obviously, the OS keeps the record for each process (one can see it in the output of ps, for example). Can it be queried by the process itself (using C)?

An ideal solution would, of course, be cross-platform, but something (Free)BSD-specific is fine too. Thanks!

Update: I've come up with a BSD-specific implementation, that uses sysctl(3) to obtain the kern_proc structure of the current process and finds the ki_start field in there. If nobody suggests anything better in a few days, I'll post my own function here for posterity...

by Mikhail T. at October 21, 2014 12:46 PM

What is wrong with my Clojure implementation of permutations [duplicate]

This question already has an answer here:

So this may be the problem that makes me give up on Clojure and return to Haskell. I tried creating a DCG (Definite Clause Grammar) in Clojure using core.logic to solve this problem, and after 5 hours I gave up on ever using core.logic DCGs again.

Now, trying the traditional way: I tried a list comprehension (commented out) and using a concat map. I have done something similar in Haskell and it worked fine. I don't know why I can't figure out how to solve this problem using Clojure, or why I constantly get conj, cons, and concat confused.

(defn remove-item [xs]
   (remove #{(first xs)} xs )
)

(defn permutation [xs]

  (if (= (count xs) 1)
      xs

     ;(for [x xs y (permutation (remove-item xs))
     ;          :let [z (map concat y)]]
     ;          z)                    

     (mapcat #(map first (permutation (remove-item %)) ) xs)

  )
)

What am I missing?

@amalloy: see comment.
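
For comparison, a minimal working permutation function in Clojure (my sketch; it assumes the elements are distinct, since remove drops every occurrence of x):

(defn permutations [xs]
  (if (<= (count xs) 1)
    (list (seq xs))
    ;; for each x, prepend it to every permutation of the remaining items
    (mapcat (fn [x]
              (map #(cons x %)
                   (permutations (remove #{x} xs))))
            xs)))

(permutations [1 2 3])
;; => ((1 2 3) (1 3 2) (2 1 3) (2 3 1) (3 1 2) (3 2 1))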

by stevemacn at October 21, 2014 12:45 PM

Jinja2 in Ansible Playbook

How can I loop over gathered facts in an Ansible playbook? I tried:

...  
haproxy_backends:
  - name: 'backend'
    servers:
      {% for host in groups['app-servers'] %}
        - name: "{{ hostvars[host]['ansible_hostname'] }}"
          ip: "{{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}"
      {% endfor %}

But this doesn't work; it results in a syntax error. Is it even possible to use Jinja in a playbook?

I use an Ansible Galaxy role (info.haproxy) and I don't want to change the provided templates.

by Juri Glass at October 21, 2014 12:45 PM

routes is already defined as object routes [on hold]

I am getting a "routes is already defined as object routes" error. Before that I got an error in the routes file and cleared it, but after compiling I am getting this error. Can anyone help me solve it?

by Karthik at October 21, 2014 12:38 PM

StackOverflow

Limitations of let rec in OCaml

I'm studying OCaml these days and came across this:

OCaml has limits on what it can put on the right-hand side of a let rec, like this one:

let memo_rec f_norec =
  let rec f = memoize (fun x -> f_norec f x) in
  f;;
Error: This kind of expression is not allowed as right-hand side of `let rec'

where memoize is a function that takes a function and turns it into a memoized version backed by a hash table. It's apparent that OCaml has some restriction on the constructs allowed on the right-hand side of let rec, but I don't really get it; could anyone explain this a bit more?

by David Lau at October 21, 2014 12:25 PM

Zipkin failing to start

I am trying to install Zipkin on CentOS.

When I try to run bin/collector, I get the following errors:

[info] Loading project definition from /home/vagrant/zipkin/project
[warn] Multiple resolvers having different access mechanism configured with same name 'local'. To avoid conflict, Remove duplicate project resolvers (`resolvers`) or rename publish
ing resolver (`publishTo`).
[info] Set current project to zipkin (in build file:/home/vagrant/zipkin/)
[info] Set current project to zipkin-collector-service (in build file:/home/vagrant/zipkin/)
[info] Writing build properties to: /home/vagrant/zipkin/zipkin-collector-service/target/resource_managed/main/com/twitter/zipkin/build.properties
[info] Packaging /home/vagrant/zipkin/zipkin-collector-service/target/zipkin-collector-service-1.2.0-SNAPSHOT.jar ...
[info] Done packaging.
[info] Running com.twitter.zipkin.collector.Main -f zipkin-collector-service/config/collector-dev.scala
[error] Sep 24, 2014 12:13:42 PM com.twitter.zipkin.collector.Main$ main
[error] INFO: Loading configuration
[error] INF [20140924-12:13:52.285] stats: Starting LatchedStatsListener
[error] 700 [20140924-12:13:52.336] net: HttpServer created http 0.0.0.0/0.0.0.0:9900
[error] 700 [20140924-12:13:52.349] net: context created: /
[error] 700 [20140924-12:13:52.350] net: context created: /report/
[error] 700 [20140924-12:13:52.351] net: context created: /favicon.ico
[error] 700 [20140924-12:13:52.353] net: context created: /static
[error] 700 [20140924-12:13:52.355] net: context created: /pprof/heap
[error] 700 [20140924-12:13:52.356] net: context created: /pprof/profile
[error] 700 [20140924-12:13:52.358] net: context created: /pprof/contention
[error] 700 [20140924-12:13:52.359] net: context created: /tracing
[error] 700 [20140924-12:13:52.361] net: context created: /health
[error] 700 [20140924-12:13:52.361] net: context created: /quitquitquit
[error] 700 [20140924-12:13:52.362] net: context created: /abortabortabort
[error] 700 [20140924-12:13:52.368] net: context created: /graph/
[error] 700 [20140924-12:13:52.370] net: context created: /graph_data
[error] INF [20140924-12:13:52.372] admin: Starting TimeSeriesCollector
[error] INF [20140924-12:13:52.373] admin: Admin HTTP interface started on port 9900.
[error] INF [20140924-12:13:52.375] builder: Building 1 stores: List(<function0>)
[error] INF [20140924-12:13:52.406] collector: Starting WriteQueueWorker
[error] INF [20140924-12:13:52.407] collector: Starting WriteQueueWorker
[error] INF [20140924-12:13:52.407] collector: Starting WriteQueueWorker
[error] INF [20140924-12:13:52.408] collector: Starting WriteQueueWorker
[error] INF [20140924-12:13:52.410] collector: Starting WriteQueueWorker
[error] INF [20140924-12:13:52.415] collector: Starting WriteQueueWorker
[error] INF [20140924-12:13:52.416] collector: Starting WriteQueueWorker
[error] INF [20140924-12:13:52.417] collector: Starting WriteQueueWorker
[error] INF [20140924-12:13:52.417] collector: Starting WriteQueueWorker
[error] INF [20140924-12:13:52.418] collector: Starting WriteQueueWorker
[error] INF [20140924-12:13:52.428] builder: Starting collector service on addr /0.0.0.0:9410
[error] INF [20140924-12:13:52.724] twitter: Finagle version 6.16.0 (rev=cb019fbe670d16dc8076494e315b4a8a6aa53111) built at 20140515-141056
[error] DEB [20140924-12:13:53.005] nio: Using select timeout of 500
[error] DEB [20140924-12:13:53.012] nio: Epoll-bug workaround enabled = false
[error] DEB [20140924-12:13:53.873] twitter: LoadService: loaded instance of class com.twitter.finagle.stats.OstrichStatsReceiver for requested service com.twitter.finagle.stats.St
atsReceiver
[error] 700 [20140924-12:13:53.932] net: context created: /config/sampleRate

I have installed Java 7 and Scala.

Note: These errors are from a second run of bin/collector. The first run downloaded libraries, compiled the Scala files and then displayed the errors; they were the same errors.

by mangusbrother at October 21, 2014 12:21 PM

How to view scala doc in eclipse

I am using Eclipse for writing Scala code.

I installed the Scala plugin in Eclipse. In the Java environment in Eclipse there are explanations available for every built-in method, but for Scala, Eclipse does not show Scaladoc.

What can I do to view the Scaladoc in Eclipse?

by user3801239 at October 21, 2014 11:59 AM

spray-json JsonFormat case classes

I'm facing this problem trying to implement a JsonFormat object for a case class that is generic. This is my class:

case class SimpleQuery[T](field : String, op : Operator, value : T) extends Query{
  def getType = ????
}

I'm trying to use the format that the spray-json GitHub page suggests, like this:

implicit def SimpleQueryJsonFormat[A <: JsonFormat] = jsonFormat4(SimpleQuery.apply[A])

But I get this compiler error

trait JsonFormat takes type parameters

The example from spray-json github page is the following:

case class NamedList[A](name: String, items: List[A])

object MyJsonProtocol extends DefaultJsonProtocol {
  implicit def namedListFormat[A :JsonFormat] = jsonFormat2(NamedList.apply[A])
}

That seems really similar to mine.

I'll also open an issue on the GitHub page.

Thank you in advance

by tmnd91 at October 21, 2014 11:48 AM

QuantOverflow

How to price a Swing Option?

I'm working in the commodity market and I have to price swing options in MATLAB, preferably with finite elements.

Has anyone already priced these kind of derivatives?

I'm thinking about reusing the structure for pricing an American option and then applying it iteratively.

More details about Swing Options are included in this paper.

Note that swing options are really useful in commodity markets because you can exercise them more than once (like American options); obviously there are some constraints that limit you.

I've already tried to price them with Least Squares Monte Carlo method (using the algorithm presented by Longstaff and Schwartz).

Now I want to price them with finite elements, but I'm having some difficulties. In particular I'm pricing them without jumps, so I'm discretizing a PDE (and not a PIDE).

I'd like to know if anyone has already implemented such a thing.

by alberto at October 21, 2014 11:42 AM

Data on banks’ leverage

Does anyone know of free resources to estimate the leverage of the banking and financial sector at an aggregate level? In particular I would be interested in something like the Federal Reserve's Flow of Funds for the following regions:

-Europe (Austria, Belgium, Denmark, Finland, France, Germany, Greece, Ireland, Italy, the Netherlands, Norway, Portugal, Spain, Sweden, Switzerland, and the United Kingdom)

-Japan

-Asia Pacific (Australia, Hong Kong, New Zealand, and Singapore)

-North America (Canada, United States)

Thanks

by franic at October 21, 2014 11:38 AM

Testing the validity of a factor model for stock returns

Consider the following system of $n$ regression equations:

$$r^i = X^i \beta^i + \epsilon^i \;\;\; \text{for} \;i=1,2,3,..,n$$

where $r^i$ is a $(T\times 1)$ vector of the T observations of the dependent variable, $X^i$ is a $(T\times k)$ matrix of independent variables, $\beta^i$ is a $(k\times1)$ vector of the regression coefficients and $\epsilon^i$ is the vector of errors for the $T$ observations of the $i^{th}$ regression.

My question is: in order to test the validity of this model for stock returns (i.e. the inclusion of those explanatory variables) using the AIC or BIC criterion, should the criterion be computed on a time-series basis (i.e. for each stock), or on a cross-sectional basis (and then averaged over time)?

by Mariam at October 21, 2014 11:23 AM

StackOverflow

Scala parallel collections, threads termination, and sbt

I am using parallel collections, and when my application terminates, sbt issues:

Not interrupting system thread Thread[process reaper,10,system]

It issues this message one time per core (minus one to be precise).

I have seen in the sbt code that this is by design, but I am not sure why the threads don't terminate along with my application. Any insight would be appreciated if you have been unlucky enough to come across the same...

by matt at October 21, 2014 11:12 AM

Planet Clojure

The perfect match

A talk about pattern matching by János Erdős

by Clojure Budapest at October 21, 2014 11:08 AM

StackOverflow

How to find out allowed options for the Clojure-function (spit)?

The Clojure-function spit allows to write data into files, e.g.:

(spit "filename.txt" "content")

It also allows adding content to existing files:

(spit "filename.txt" "content" :append true)

In the documentation ((doc spit)) it only says that options can be passed on to clojure.java.io/writer. But (doc clojure.java.io/writer) does not list the allowed options. So is there a "detailed mode" for the documentation available?

I found the :append option via http://clojuredocs.org/clojure.core/spit, but I'm sure it is also listed somewhere in the documentation.
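
For what it's worth, when a docstring comes up short, reading the source from the REPL shows which options a function actually consumes (a sketch):

(require '[clojure.repl :refer [source]])

;; prints the definition of clojure.java.io/writer; the options it
;; destructures and passes along are visible in the source
(source clojure.java.io/writer)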

by Edward at October 21, 2014 10:56 AM

Play Framework return result from Twitter Future

I have the following code; it returns true when the number is 0, and an exception otherwise:

  import com.twitter.util.Future 

  def compute(x: Int): Future[Boolean] = {
     if (x == 0) {
       Future.value(true)
     } else {
       Future.value(new Exception("Invalid number"))
     }
  }  

And my controller for work with this code:

 object MyController extends Controller {

   def get(x: Int) = Action {
     compute(x).flatMap {
       case x: Boolean => Ok(views.html.ok("ok"))
       case _ => NotFound 
     }
   }  
 }

But when I run this code, I get: type mismatch; found: play.api.mvc.Result, required: com.twitter.util.Future[?]

How do I extract the value from the Future and pass it as the result to the response?

by lito at October 21, 2014 10:55 AM

Getting started with Haskell

For a few days I've tried to wrap my head around the functional programming paradigm in Haskell. I've done this by reading tutorials and watching screencasts, but nothing really seems to stick. Now, in learning various imperative/OO languages (like C, Java, PHP), exercises have been a good way for me to go. But since I don't really know what Haskell is capable of and because there are many new concepts to utilize, I haven't known where to start.

So, how did you learn Haskell? What made you really "break the ice"? Also, any good ideas for beginning exercises?

by anderstornvig at October 21, 2014 10:44 AM

How to generate a well-formatted output of (all-ns) in Clojure?

I'd like to view the list of all namespaces. Therefore I use (all-ns), which prints out a long list of namespaces.

Instead of having one namespace after another, I'd like to have each namespace on its own line. So how can I print out a list such that each item of the list is on its own line?
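
For illustration, one straightforward way to get one namespace per line (a sketch):

;; all-ns returns Namespace objects; print each one's name on its own line
(doseq [n (all-ns)]
  (println (ns-name n)))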

by Edward at October 21, 2014 10:43 AM

Ansible ad-hoc commands don't work with Cisco devices

I have a newly installed Ubuntu server with Ansible. I am trying to use Ansible on my network, but it fails right from the beginning.

10.102.249.3 is a router

zab@UbuntuSrv:/etc/ansible$ ansible 10.102.249.3 -a "conf t" --ask-pass -vvv       
SSH password: 
<10.102.249.3> ESTABLISH CONNECTION FOR USER: zab
<10.102.249.3> REMOTE_MODULE command conf t
<10.102.249.3> EXEC ['sshpass', '-d6', 'ssh', '-C', '-tt', '-q', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/home/zab/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'StrictHostKeyChecking=no', '-o', 'Port=22', '-o', 'GSSAPIAuthentication=no', '-o', 'PubkeyAuthentication=no', '-o', 'ConnectTimeout=10', '10.102.249.3', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1412930091.8-230458979934210 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1412930091.8-230458979934210 && echo $HOME/.ansible/tmp/ansible-tmp-1412930091.8-230458979934210'"]
<10.102.249.3> PUT /tmp/tmpZUkRET TO Line has invalid autocommand "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1412930091.8-230458979934210 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1412930091.8-230458979934210 && echo $HOME/.ansible/tmp/ansible-tmp-1412930091.8-230458979934210'"/command
10.102.249.3 | FAILED => failed to transfer file to Line has invalid autocommand "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1412930091.8-230458979934210 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1412930091.8-230458979934210 && echo $HOME/.ansible/tmp/ansible-tmp-1412930091.8-230458979934210'"/command:

Connection to 10.102.249.3 closed by remote host.
Connection closed

zab@UbuntuSrv:/etc/ansible$ ansible 10.102.249.3 -m ping  --ask-pass -vvv         
SSH password: 
<10.102.249.3> ESTABLISH CONNECTION FOR USER: zab
<10.102.249.3> REMOTE_MODULE ping
<10.102.249.3> EXEC ['sshpass', '-d6', 'ssh', '-C', '-tt', '-q', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/home/zab/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'StrictHostKeyChecking=no', '-o', 'Port=22', '-o', 'GSSAPIAuthentication=no', '-o', 'PubkeyAuthentication=no', '-o', 'ConnectTimeout=10', '10.102.249.3', "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1412930136.7-170302836431532 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1412930136.7-170302836431532 && echo $HOME/.ansible/tmp/ansible-tmp-1412930136.7-170302836431532'"]
<10.102.249.3> PUT /tmp/tmpOPuOWh TO Line has invalid autocommand "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1412930136.7-170302836431532 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1412930136.7-170302836431532 && echo $HOME/.ansible/tmp/ansible-tmp-1412930136.7-170302836431532'"/ping
10.102.249.3 | FAILED => failed to transfer file to Line has invalid autocommand "/bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1412930136.7-170302836431532 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1412930136.7-170302836431532 && echo $HOME/.ansible/tmp/ansible-tmp-1412930136.7-170302836431532'"/ping:

Connection to 10.102.249.3 closed by remote host.
Connection closed

Update: What is wrong with my playbook? I get ERROR: raw is not a legal parameter at this level in an Ansible Playbook

---
- hosts: testsw
  remote_user: zab
  tasks: 
  - name: copy tftp run
    raw: copy tftp://10.1.78.153/test running-config

by Coul at October 21, 2014 10:32 AM

System tray icon is not responsive

When I open my application, it waits for a connection to the server. I have done that by calling a slot run() which waits for an acknowledgement packet from the server; when it receives it, it hides the "Waiting for connection" string and loads everything else. The problem is that while it waits for a packet, the system tray icon does not respond to anything; once the server sends the packet and the application loads, the tray icon starts responding (to the right-click menu).

I am using ZeroMQ for IPC.

I have something like this:

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    MainWindow w;
    w.show(); 

    //THIS PART
    QTimer::singleShot(2000,&w,SLOT(run()));

    return a.exec();
}

by user3840048 at October 21, 2014 10:30 AM

/r/compsci

I love programming, but I'm not sure whether I should pick CS in uni.

I'm worried that jobs will be harder to find by the time I graduate, because almost everyone is graduating with a CS degree nowadays. I would love a job in the software industry, but I feel that with all these new graduates the jobs are going to be harder to come by.

submitted by foliomark

October 21, 2014 10:12 AM

CompsciOverflow

Principles of Programming Languages: Understanding Judgements

I am taking a principles of programming languages class right now and am trying to understand the following judgement form.

n' = -toNumber(v)
------------------
-v --> n'

(Sorry, I can't post pictures yet. And Stack doesn't take LaTeX.) I think it means "n' = -v implies that -v maps to n'", or something along those lines. I guess I really just don't know what the --> means. In math it can mean either "maps to" or "implies", and "maps to" just made more sense.
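
For what it's worth, written as an inference rule in LaTeX (assuming --> is the evaluation/reduction relation of the language being modelled), the judgement would be

$$\frac{n' = -\mathsf{toNumber}(v)}{-v \;\longrightarrow\; n'}$$

read as: whenever the premise above the line holds, the expression $-v$ evaluates (reduces) to $n'$.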

by steveclark at October 21, 2014 10:10 AM

StackOverflow

Transform a std::vector of boost.asio::ip::address via boost::algorithm::join and boost::adaptors::transformed

Short question: I do not know how to properly use boost::adaptors::transformed with boost::algorithm::join. The following does not work:

boost::algorithm::join(addresses |
                       boost::adaptors::transformed(std::mem_f(&boost::asio::ip::address_v4::to_string)), ", ");

I do not understand the syntax of boost::adaptors::transformed. How do I call the member function for each object in the std::vector?

Currently I'm concatenating the strings manually, but I would prefer the functional approach outlined above.

Thanks.

by Florian Wolters at October 21, 2014 10:07 AM

Fefe

The scaremongering is working. The next generation of lawyers is calling for ...

The scaremongering is working. The next generation of lawyers is calling for harsher punishments. They are just blindly following their role models from politics, and those are almost all lawyers too, when they aren't teachers.

The concrete figures are fairly shocking: a third are in favour of the death penalty, and half consider torture appropriate.

October 21, 2014 10:01 AM

StackOverflow

C++ LINQ-like iterator operations

Having been tainted by Linq, I'm reluctant to give it up. However, for some things I just need to use C++.

The real strength of linq as a linq-consumer (i.e. to me) lies not in expression trees (which are complex to manipulate), but the ease with which I can mix and match various functions. Do the equivalents of .Where, .Select and .SelectMany, .Skip and .Take and .Concat exist for C++-style iterators?

These would be extremely handy for all sorts of common code I write.

I don't care about LINQ-specifics, the key issue here is to be able to express algorithms at a higher level, not for C++ code to look like C# 3.0. I'd like to be able to express "the result is formed by the concatenation first n elements of each sequence" and then reuse such an expression wherever a new sequence is required - without needed to manually (and greedily) instantiate intermediates.

by Eamon Nerbonne at October 21, 2014 09:53 AM

Combining two vectors (filling up containers with the contents of several cans)

I have two vectors

(def container [{:no 1 :volume 10} {:no 2 :volume 20}])
(def cans [{:no 1 :volume 2} {:no 2 :volume 8} {:no 1 :volume 5} {:no 2 :volume 8}])

I'd like to fill up the containers with the cans so as to return something like this:

[{:no 1 :volume 10
  :cans [{:no 1 :volume 2} {:no 2 :volume 8}]}
 {:no 2 :volume 20
  :cans [{:no 1 :volume 5} {:no 2 :volume 8}]}]

thereby keeping track of which can went into which container. I started by using reduce but cannot get my head around how to do this without using a mutating store for holding the remaining cans. Any ideas?

UPDATE

By fill up, I meant: pack as many cans into the first container as possible, until it's full or as near as possible (the sum of the cans' volumes not exceeding the container's volume), then start filling up the second container until it's full or as near as possible, and so on.
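
A minimal sketch of one way to do this without a mutable store (fill-containers is just an illustrative name), assuming cans are packed greedily in the order given; the "remaining cans" are simply threaded through the reduce as part of the accumulator:

(defn fill-containers [containers cans]
  ;; accumulator is [filled-containers remaining-cans]
  (first
    (reduce
      (fn [[filled remaining] container]
        (let [{packed :cans left :rest}
              (reduce (fn [{:keys [cans rest room]} can]
                        (if (<= (:volume can) room)
                          ;; can fits: pack it and shrink the remaining room
                          {:cans (conj cans can) :rest rest :room (- room (:volume can))}
                          ;; can doesn't fit: keep it for the next container
                          {:cans cans :rest (conj rest can) :room room}))
                      {:cans [] :rest [] :room (:volume container)}
                      remaining)]
          [(conj filled (assoc container :cans packed)) left]))
      [[] cans]
      containers)))

(fill-containers container cans)
;; => [{:no 1 :volume 10 :cans [{:no 1 :volume 2} {:no 2 :volume 8}]}
;;     {:no 2 :volume 20 :cans [{:no 1 :volume 5} {:no 2 :volume 8}]}]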

by user2245369 at October 21, 2014 09:51 AM

Slick 2.1.0, "No matching Shape found" with mapped projection, lick does not know how to map the given types

I'm sorry, but I have been blocked on a mapped projection for too long now.

case class Token(val value:Array[Byte], val expires:Date=new Date(), val kind:String = "Bearer")

object Token {
  private def generate = // ...
}

class Tokens(tag:Tag) extends Table[Token](tag, "tokens") {

  implicit val dateColumnType = MappedColumnType.base[java.util.Date, Long](
    _.getTime, new java.util.Date(_))

  val accounts = TableQuery[Accounts]

  def value   = column[Array[Byte]]("value", O.NotNull)
  def expires = column[Date]("expires", O.NotNull)
  def kind    = column[String]("kind", O.NotNull)
  def accountKey = column[UUID]("account_key", O.NotNull)

  def * = (value, expires, kind) <> (
    (r:(Array[Byte], Long, String)) => Token(r._1, new Date(r._2), r._3), // From row
    (m:Token) => Some((m.value, m.expires.getTime, m.kind)) // From model
  )

  def fk_account = foreignKey("fk_account", accountKey, accounts)(_.key)

}

I have tried simpler types for the columns value (String) and expires (Long), but the problem remains the same (more or less).

[error] ..\app\models\Tokens.scala:51: No matching Shape found.
[error] Slick does not know how to map the given types. 
[error] Possible causes: T in Table[T] does not match your * projection. Or you use an unsupported type in a Query (e.g. scala List).
[error] Required level: scala.slick.lifted.FlatShapeLevel
[error]      Source type: (scala.slick.lifted.Column[Array[Byte]], scala.slick.lifted.Column[java.util.Date], scala.slick.lifted.Column[String]) 
[error]    Unpacked type: (Array[Byte], Long, String)
[error]      Packed type: Any 
[error]   def * = (value, expires, kind) <> ( 
[error]                                  ^

I understand the problem, but not well enough to fix it. Can someone explain how to solve it, and why?

Thanks a lot

by Anonymous at October 21, 2014 09:41 AM

ZMQ python socket from context catch exception

I have NGINX server with uWSGI and python with PyZMQ (installed as sudo pip install pyzmq).

I'm trying create socket from ZMQ context, but always catch exception.

import zmq
import os
import sys
from cgi import parse_qs, escape

sys.path.append('/usr/share/nginx/www/application')
os.environ['PYTHON_EGG_CACHE'] = '/usr/share/nginx/www/.python-egg'

def application(environ, start_response): 
    ctx = zmq.Context()         

    try: 
        message = 'Everything OK'
        s = ctx.socket(zmq.REQ) 
    except Exception as e: 
        message = "Exception({0}): {1}".format(e.errno, e.strerror) 
        pass 

    response_headers = [('Content-type', 'text/plain'), ('Content-Length', str(len(message)))] 
    start_response('200 OK', response_headers); 
    return [message]

It raised exception

Exception(14): Bad address

If I commented line

s = ctx.socket(zmq.REQ)

then is everything ok.

I searched on internet, but nobody has same problem.

Please, do you have any idea, what am I doing wrong?

Edit:

I wrote simple python script, that working and I get the response from recv:

import zmq
import os
import sys

print 'Create zeromq instance...'

ctx = zmq.Context()
print 'Create socket ...'

try: 
    s = ctx.socket(zmq.REQ)
except Exception as e: 
    print "Exception({0}): {1}".format(e.errno, e.strerror) 
    sys.exit()

s.connect('tcp://localhost:5555')
s.send('fTtt;')
message = s.recv()

print message 

I seems to be a problem with uWSGI run python ZMQ, but why?

by mkxqiu at October 21, 2014 09:41 AM

Planet Clojure

October 2014 London Clojure Dojo at ThoughtWorks

When:
Tuesday, October 28, 2014 from 7:00 PM to 10:00 PM (GMT)

Where:
ThoughtWorks London Office
173 High Holborn
WC1V London
United Kingdom

Hosted By:
London Clojurians

Register for this event now at:
http://www.eventbrite.com/e/october-2014-london-clojure-dojo-at-thoughtworks-tickets-13853463081?aff=rss

Event Details:

London Clojure Dojo at ThoughtWorks

The goal of the session is to help people learn to start working with Clojure through practical exercises, but we hope that more experienced developers will also come along to help form a bit of a London Clojure community. The dojo is a great place for new and experienced Clojure coders to learn more. If you want to know how to run your own dojo, or get an idea of what dojos are like, you can read more here.

We hope to break up into groups for the dojo, so if you have a laptop with a working Clojure environment please bring it along.

We'll be discussing the meetup on the london-clojurians mailing list.

Clojure is a JVM language that has syntactic similarities to Lisp, full integration with Java and its libraries, and a focus on providing a solution to the issue of single-machine concurrency.

Its small core makes it surprisingly easy for Java developers to pick up, and it provides a powerful set of concurrency strategies and data structures designed to make immutable data easy to work with. If you went to Rich Hickey's LJC talk about creating Clojure you'll already know this; if not, it's well worth watching the Rich Hickey "Clojure for Java Programmers" video or the Stuart Halloway "Radical Simplicity" video.


by London Clojurian Events at October 21, 2014 09:38 AM

StackOverflow

Preserve type/class tag among akka messages

I have a situation where I want to preserve information about some generic type passed within a message, so that I can create another generic class with that same type within the receive method responsible for processing the message.

At first glance I thought TypeTag was my best friend here, but after trying it out it seems this is not the best possible solution, or not a solution at all. Let me first explain what I have at the moment and what the outcome is.

Message case class

trait MessageTypeTag[T] {
  def typeTag: TypeTag[T]
}

case class Message[T](id: Int, payload: T, helper: MyClass[T], 
                      cond: Condition[MyClass[T]])(implicit val typeTag: TypeTag[T])
           extends MessageTypeTag[T]

Receive method

def receive() = {
  case m@Message(id, payload, helper, cond) => {
    // this prints a proper type tag, i.e. String, because type is known in the runtime
    println(m.typeTag.tpe)

    // compiler complains here because it sees m.typeTag as TypeTag[Any], i.e. exact
    // type is not known in the compile time
    val temp = new MyClass2[m.typeTag.tpe](...)
 }
}

Dirty solution: after reading several articles, discussions, and documentation on both Scala and akka, I came up with a somewhat dirty solution by putting the (call to the) factory method in the case class.

case class Message[T](id: Int, payload: T, helper: MyClass[T], 
                      cond: Condition[MyClass[T]])(implicit val typeTag: TypeTag[T])
           extends MessageTypeTag[T] {
  def getMyClass2: MyClass2[T] = {
    // instantiate an object of type T
    val bla = typeTag.mirror.runtimeClass(typeTag.tpe).newInstance.asInstanceOf[T]
    // we can call apply here to populate created object or do whathever is needed
    ...
    // instantiate MyClass2 parametrized with type T and return it
    new MyClass2[T](Some(bla))
  }
}

As you can see, this is far from a good solution/design, because this case class is anything but lightweight and actually defeats the purpose of a case class. It could be improved so that the reflection call is not coded here but in some external factory that is merely called from within the case class, but I have a feeling there must be a better approach to accomplish this.

Any suggestion would be very appreciated. If there are some more information needed, I can provide it.

And, I believe, a similar problem/solution has been described here, but I'm wondering whether there is a better way. Thanks.

by htomek at October 21, 2014 09:28 AM

Can't change Play plugins when running tests (FakeApplication)

I'm trying to test with different mocked plugins in place of the real one. It works fine with one mock, but when I have two, it always uses the first one.

class UserSpec extends Specification {
  "User" should {
    val fakeAppA = new FakeApplication(
      additionalPlugins = Seq(
        "FakeServiceA"
        )
      )

    "have FakeServiceA " in running(fakeAppA) {
      UserController.doit() === "FakeServiceA"
    }

    val fakeAppB = new FakeApplication(
      additionalPlugins = Seq(
        "FakeServiceB"
        )
      )

    "have FakeServiceB" in running(fakeAppB) {
      // doesn't work, gets FakeServiceA
      UserController.doit() === "FakeServiceB"
    }
  }
}

object UserController extends Controller {
  val service = Play.application.plugin[Service]
    .getOrElse(throw new RuntimeException("Service not loaded"))

  def doit() = service.serviceIt()
}

class Service(app: Application) extends Plugin {
  def serviceIt(): String = "Service"
}

class FakeServiceA(app: Application) extends Service(app) {
  override def serviceIt(): String = "FakeServiceA"
}

class FakeServiceB(app: Application) extends Service(app) {
  override def serviceIt(): String = "FakeServiceB"
}

by Stephen at October 21, 2014 09:25 AM

Scala list find and subtraction

My Scala list is as below:

    List((192.168.11.3,A,1413876302036,-,-,UP,,0.0,0.0,12,0,0,Null0,UP,0,0,4294967295,other), (192.168.11.3,A,1413876302036,-,-,UP,,0.0,0.0,8,0,0,C,DOWN,0,0,100000000,P),  (192.168.1.1,A,1413876001775,-,-,UP,,0.0,0.0,12,0,0,E,UP,0,0,4294967295,other), (192.168.1.1,A,1413876001775,-,-,UP,,0.0,0.0,8,0,0,F,DOWN,0,0,100000000,E))

Now I want the following operation. The third element of each tuple is a timestamp; above, the values are 1413876302036 and 1413876001775. I want to subtract the smallest timestamp from each, as below:

 val sub = ((192.168.11.3,A,(1413876302036-1413876001775),-,-,UP,,0.0,0.0,12,0,0,Null0,UP,0,0,4294967295,other),(192.168.1.1,A,(1413876001775-1413876001775),-,-,UP,,0.0,0.0,12,0,0,E,UP,0,0,4294967295,other))

How should I calculate this in Scala?

by yogesh at October 21, 2014 09:21 AM

Get type from class name in Scala

I want to do something like the following:

val factoryType = typeOf[Class.
          forName("com.menith.amw.worksheets." + params("problem") + "ProblemFactory")]
val factory = parse(params("args")).extract[factoryType]

The parse method allows me to obtain an instance of a case class by giving it a JSON string and I can then use the extract method by passing it the expected type. However I'm having some issues getting the type from Class.forName.

by ajnatural at October 21, 2014 09:06 AM

CompsciOverflow

Is "duplicate" in RPN enough for replacing variable binding in term expressions?

I try to work out some consequences of storing (or "communicating"/"transmitting") a rational number by a term expression using the following operators: $0$, $\mathsf{inc}$, $\mathsf{add}$, $\mathsf{mul}$, $\mathsf{neg}$, and $\mathsf{inv}$. Here $\mathsf{add}$ and $\mathsf{mul}$ are binary operators, $\mathsf{inc}$, $\mathsf{neg}$, and $\mathsf{inv}$ are unary operators, and $0$ is a $0$-ary operator (i.e. a constant). Because I want to be able to also store numbers like $(3^3+3)^3$ efficiently, I need some form of variable binding. I will use the notation $(y:=t(x).f(y))$ to be interpreted as $f(t(x))$ in this question. Now I can store $(3^3+3)^3$ as $$(c3:=(1+1+1).x:=((c3*c3*c3)+c3).(x*x*x)).$$ If I stick to the operators $0$, $\mathsf{inc}$, $\mathsf{add}$, and $\mathsf{mul}$, this becomes $$(c3:=\mathsf{inc}(\mathsf{inc}(\mathsf{inc}(0))).x:=\mathsf{add}(\mathsf{mul}(\mathsf{mul}(c3,c3),c3),c3).\mathsf{mul}(\mathsf{mul}(x,x),x)).$$ Using RPN with a "duplicate" operation written $\mathsf{dup}$ instead of variable binding, this becomes $$0\ \mathsf{inc}\ \mathsf{inc}\ \mathsf{inc}\ \mathsf{dup}\ \mathsf{dup}\ \mathsf{dup}\ \mathsf{mul}\ \mathsf{mul}\ \mathsf{add}\ \mathsf{dup}\ \mathsf{dup}\ \mathsf{mul}\ \mathsf{mul}.$$


My question is whether it is always possible to replace variable binding by the "duplicate" operation. The binary operations ($\mathsf{add}$ and $\mathsf{mul}$) are associative and commutative, but it seems to me that even this is not enough for ensuring that variable binding can be completely eliminated. Take for example $$(c2:=(1+1).(x:=(((c2+1)*c2)+1).(y:=(x*x).((y+c2)*y)))).$$ If I stick to the operators $0$, $\mathsf{inc}$, $\mathsf{add}$, and $\mathsf{mul}$, this becomes $$(c2:=\mathsf{inc}(\mathsf{inc}(0)).(x:=\mathsf{inc}(\mathsf{mul}(\mathsf{inc}(c2),c2)).(y:=\mathsf{mul}(x,x).\mathsf{mul}(\mathsf{add}(y,c2),y)))).$$ Using RPN with a "store" operation written $\mathsf{sto}(x)$ instead of variable binding, this becomes $$0\ \mathsf{inc}\ \mathsf{inc}\ \mathsf{sto}(c2)\ c2\ \mathsf{inc}\ c2\ \mathsf{mul}\ \mathsf{inc}\ \mathsf{sto}(x)\ x\ x\ \mathsf{mul}\ \mathsf{sto}(y)\ y\ c2 \ \mathsf{add}\ y\ \mathsf{mul}.$$ After eliminating $\mathsf{sto}(x)$ and $\mathsf{sto}(y)$ by $\mathsf{dup}$, this becomes $$0\ \mathsf{inc}\ \mathsf{inc}\ \mathsf{sto}(c2)\ c2\ \mathsf{inc}\ c2\ \mathsf{mul}\ \mathsf{inc}\ \mathsf{dup}\ \mathsf{mul}\ \mathsf{dup}\ c2 \ \mathsf{add}\ \mathsf{mul}.$$ Using explicit substitution to eliminate $\mathsf{sto}(c2)$, this becomes $$0\ \mathsf{inc}\ \mathsf{inc}\ \mathsf{dup}\ \mathsf{inc}\ \mathsf{mul}\ \mathsf{inc}\ \mathsf{dup}\ \mathsf{mul}\ \mathsf{dup}\ 0\ \mathsf{inc}\ \mathsf{inc}\ \mathsf{add}\ \mathsf{mul}.$$ My issue with explicit substitution is that it might lead to an exponential increase in the size of the expression. It's easy to see that expressions like $(3^3+3)^3$ or $((3^3+3)^3+3)^3$ can't be stored efficiently without something like $\mathsf{sto}(x)$ or $\mathsf{dup}$. Is there another way to eliminate $\mathsf{sto}(x)$, like an additional first-in, first-out queue? Or can one prove that an exponential blowup of the expression won't happen, if only explicit substitution and $\mathsf{dup}$ are "suitably" used together?

by Thomas Klimpel at October 21, 2014 09:02 AM

CompsciOverflow

Solving a dynamic programming problem?

Alex writes down the decimal representations of all natural numbers between and including m and n, (m ≤ n). How many zeroes will he write down?

A friend of mine said that this problem can be solved by dynamic programming, but I can't understand how. Can someone explain it in detail? Here is some sample input and output:

1. input: m=10, n=11; output: 1

2. input: m=100, n=200; output: 22

by Shahed al mamun at October 21, 2014 08:44 AM

StackOverflow

python decorators stacking

I have been trying to understand decorators and closures better.

I am trying to decorate the function to achieve:

  • remembering previously passed values,
  • counting how many times the function was called.

I want to make it using two separate decorators - for science :)

So I managed to create this working code (I used some snippet for the counting - I admit)

class countcalls(object):
    "Decorator that keeps track of the number of times a function is called."

    __instances = {}

    def __init__(self, f):
        self.__f = f
        self.__numcalls = 0
        countcalls.__instances[f] = self

    def __call__(self, *args, **kwargs):
        self.__numcalls += 1
        return self.__f(*args, **kwargs)

    def count(self):
        "Return the number of times the function f was called."
        return countcalls.__instances[self.__f].__numcalls

    @staticmethod
    def counts():
        "Return a dict of {function: # of calls} for all registered functions."
        return dict([(f.__name__, countcalls.__instances[f].__numcalls) for f in countcalls.__instances])

def wrapper(x):
    past=[]
    @countcalls
    def inner(y):
        print x 
        print inner.count()
        past.append(y)
        print past

    return inner

def main():
    foo = wrapper("some constant")

    foo(5)
    foo("something")


if __name__ == '__main__':
    main()

output:

some constant
1
[5]
some constant
2
[5, 'something']  

Now I want to change the memoize function into a neat, Pythonic decorator. Here is what I came up with so far:

class countcalls(object):
    "Decorator that keeps track of the number of times a function is called."

    __instances = {}

    def __init__(self, f):
        self.__f = f
        self.__numcalls = 0
        countcalls.__instances[f] = self

    def __call__(self, *args, **kwargs):
        self.__numcalls += 1
        return self.__f(*args, **kwargs)

    def count(self):
        "Return the number of times the function f was called."
        return countcalls.__instances[self.__f].__numcalls

    @staticmethod
    def counts():
        "Return a dict of {function: # of calls} for all registered functions."
        return dict([(f.__name__, countcalls.__instances[f].__numcalls) for f in countcalls.__instances])


class memoize(object):
    past=[]

    def __init__(self, f):
        past = []
        self.__f = f

    def __call__(self, *args, **kwargs):
        self.past.append(*args)

        return self.__f(*args, **kwargs)

    def showPast(self):
        print self.past


@memoize
@countcalls
def dosth(url):
    print dosth._memoize__f.count()  ## <-- this is so UGLY
    dosth.showPast()

def main():
    dosth("one")
    dosth("two")

if __name__ == '__main__':
    main()

And here is the output:

1
['one']
2
['one', 'two']

How do I get rid of the "ugly" line (print dosth._memoize__f.count())? In other words, how can I directly call the methods of the stacked decorators? (Without adding a method to the decorators that calls the methods of the other decorators; that is not my point.)

by Chris at October 21, 2014 08:41 AM

Service that returns data from an asynchronous method

I am using Sails' ORM (Waterline). I have written a geturl service that should return the url of several models/actions in my app. I am currently calling this service inside my templates.

(As I am developing this alone, don't hesitate to warn me if this design pattern is wrong.)

Now it happens that Waterline's .find() method is asynchronous (as it should be). I always use callbacks to do things when inserting or fetching things in the database.

Now I have seen everywhere that I cannot return any data from asynchronous methods. As a consequence I am puzzled because I want to create this [damned] service to centralize the URL management.

Here is my current code:

module.exports = {
    variete: function(id_objet) {
        var string = '/default_url';
        return onvariete(id_objet, function (err, url) {
          if (err) {
              sails.log.error('Error : ', err);
          } else {
              return url;
          }
        });
    }
};


function onvariete(id_objet, next) {
  var url = '/';
  return Variete.findOne({id:id_objet}).exec(function (err, v) {
    sails.log.info('URL Variety : '+ v.nom + ' / ' +id_objet + ' / ' + v.slug);
    if (err) {
      sails.log.error('Error : ' + v.nom + ' / ' + err);
      // Do nothing.
      return next(new Error('Variete error'), undefined);
    } else if (!v) {
      return next(new Error('Variete not found'), undefined);
    } else if (!v.slug) {
      // variete doesn't have a slug field
      // we redirect to /v/:id
      url += 'v/' + v.id;
      return next (null, url);
    } else {
      // Ok variete has got a slug field
      sails.log.info('GOT A SLUG! ' + v.slug);
      url += 'variete/' + v.slug;
      return next (null, url);
    }
  });
}

I made a static object that embeds my geturl service, and then inside a Jade template:

a(href="#{s.geturl.variete(ann.variete.id)}" title="#{ann.variete.name}") #{ann.variete.name}

And I can get something like:

<a title="Tomate Coeur de Boeuf" href="undefined">Tomate Coeur de Boeuf</a>

Thank you in advance.

by Le Barde at October 21, 2014 08:33 AM

Can someone explain Clojure Transducers to me in Simple terms?

I have tried reading up on this, but I still don't understand their value or what they replace. And do they make my code shorter, more understandable, or what?

Update

A lot of people posted answers, but it would be nice to see examples with and without transducers for something very simple, which even an idiot like me can understand. Unless of course transducers need a certain high level of understanding, in which case I will never understand them :(
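
By way of illustration, here is roughly the smallest with/without pair (a sketch, assuming Clojure 1.7+, where map and filter have transducer-returning arities):

;; Without transducers: each step builds an intermediate lazy sequence.
(->> (range 10) (map inc) (filter even?) (reduce +))
;; => 30

;; With transducers: the same steps are composed into a single pass,
;; with no intermediate sequences.
(transduce (comp (map inc) (filter even?)) + (range 10))
;; => 30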

by Zubair at October 21, 2014 08:23 AM

How to insert anti forgery token with Clojure Enlive

I'm trying to insert the anti-forgery token into an HTML form using ring.util.anti-forgery:

(html/defsnippet post-edit-form "templates/blog.html" [:.post-edit]
   []
   [:form] (html/after (html/html-content (anti-forgery-field))))

I get an exception:

java.lang.IllegalArgumentException
Don't know how to create ISeq from: net.cgrand.enlive_html$html_content$fn__5571
RT.java:505 clojure.lang.RT.seqFrom
RT.java:486 clojure.lang.RT.seq
core.clj:133    clojure.core/seq
enlive_html.clj:227 net.cgrand.enlive-html/flatten-nodes-coll[fn]
enlive_html.clj:232 net.cgrand.enlive-html/flatten-nodes-coll[fn]
LazySeq.java:40 clojure.lang.LazySeq.sval
...

Also tried this:

(html/defsnippet post-edit-form "templates/blog.html" [:.post-edit]
  []
  [:form] (html/after (html/html [:input {:id "__anti-forgery-token"
                                          :name "__anti-forgery-token"
                                          :type "hidden"
                                          :value *anti-forgery-token*}])))

does not work :(

(anti-forgery-field) produces just an HTML string with one 'input', but I can't insert it into the form.
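
One hedged guess at a fix: html-content is a transformation (a function), not a node seq, which is what html/after expects; that matches the "Don't know how to create ISeq from: ...html_content$fn" error. Parsing the string with html-snippet first should work. A sketch, assuming the field should land inside the form:

(html/defsnippet post-edit-form "templates/blog.html" [:.post-edit]
  []
  ;; html-snippet parses the HTML string from (anti-forgery-field)
  ;; into Enlive nodes; append inserts them as the form's last child.
  [:form] (html/append (html/html-snippet (anti-forgery-field))))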

by uNmAnNeR at October 21, 2014 08:16 AM

Lobsters

So, what are transducers, exactly?

This is one of the most succinct tweets ever. Tony Morris just says it like it is: “Control.Lens.Fold”.


by irrequietus at October 21, 2014 08:09 AM

StackOverflow

Unclear scala compiler behaviour when using shapeless to map over tuples

Assume the following setup:

val f = new (Int -> Int)(_ + 1)
object g extends (Int -> Int)(_ + 1)

Then we have

(1,2) map g 
(2,3)

But

(1,2) map f

fails to compile with error

error: could not find implicit value for parameter mapper: shapeless.ops.tuple.Mapper[(Int, Int),f.type]
          (1,2) map f

but (f == g) in terms of ->[A,B], so what am I missing?

by user1512719 at October 21, 2014 08:05 AM

Planet Clojure

Monads for Software Engineers

The term monad can sound weird to a software engineer because it comes from category theory. There's plenty of related maths material around, but let's just forget about it and about the choice of that word (BTW, am I the only one for whom it triggers memories of Leibniz from high-school philosophy classes?): I'm interested in a software engineering perspective on the topic and, since I couldn't find an introductory one that was clear enough for me, I decided to take a dive in and build my own.

What is a monad?

Very, very shortly: a data structure implements a monad interface iff it defines a certain "lifted-up" function-sequencing operator (think of a functional ";"-like sequencing).
So a monad itself is basically an interface meant to chain calculations while maintaining some kind of additional "out-of-band" stuff, carrying this "context" along the way (here's why I talk about "lifted-up" sequencing). For example they can carry state, IO conditions, error conditions, data structure markers, tags… Whatever.
Every interface having (at least) two specific methods (I'll call them buildMonad and passMonadThrough) and satisfying certain "carry on" rules (we'll see them in a moment) can be called a "monad".

What are they useful for?

Monads are useful because in some circumstances they simplify and clarify code. They let you focus on the main functional transformation flow without being distracted by “contextual” information. Think of a “nil-checking” monadic sequencing operator that will nil-check results for you at every function application step and will short-circuit the transformational pipeline if at some point a nil value is produced.
An implementation of monads for Clojure is available as algo.monads, while a tutorial and some neat examples in Clojure have been written by Konrad Hinsen.
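
To make the nil-checking example above concrete, here is a minimal sketch using algo.monads' maybe-m (assuming org.clojure/algo.monads is on the classpath; safe-div is a made-up helper):

(require '[clojure.algo.monads :refer [domonad maybe-m]])

(defn safe-div [a b]
  (when-not (zero? b) (/ a b)))

;; maybe-m threads values through the steps and short-circuits
;; the whole pipeline as soon as one step yields nil.
(domonad maybe-m
  [x (safe-div 10 2)   ; 5
   y (safe-div x 0)    ; nil -> short-circuit
   z (safe-div y 3)]   ; never evaluated
  z)
;; => nil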

Under a monad’s cover

Let’s have a look at the interface definition and then at the “monadic behaviour” rules: I’ll use a lisp-style notation, except for the infix passMonadThrough and the x -> body style for anonymous functions with a single x parameter. C1 === C2 means that the computation C1 behaves the same as computation C2.
  • (buildMonad v): adds some context to a value v, building a new monad m
  • (m passMonadThrough f): it takes a value already enriched with a context (i.e. the monad m), a valid transformation of a “naked” value into something else with the same kind of context (i.e. the function f) and hooks the two in some implementation-specific way, but still observing the following “carry on” rules:
    1. ((buildMonad v) passMonadThrough f) === (f v): passing a value just enriched with a context (so, a newly built monad) through a context-injecting transformation will behave like applying the function to the simple-value part. This is the first monad constructor “neutrality” rule, called “left identity” (buildMonad is on the left of passMonadThrough).
    2. (m passMonadThrough buildMonad) === m: a context-enriched value (i.e., the monad m) passed through buildMonad should stay unchanged both in value and context. This is the second monad constructor “neutrality” rule, called “right identity” (buildMonad is on the right of passMonadThrough).
    3. ((m passMonadThrough f) passMonadThrough g) === (m passMonadThrough (x -> ((f x) passMonadThrough g))): a context-enriched (i.e. monad m) value passed through some context-injecting transformation (i.e. function f), and then passed through some other context-injecting transformation (i.e. function g) will yield the same result as passing the initial context-enriched value m to the passMonadThrough-mediated composition of f and g (and NOW I realize it's much more easily understood by reading it than by explaining it…). This is the rule implementing "chaining" and it is called "associativity" (we'll see in a moment why).
Basically the “chaining” rule mandates that the passMonadThrough method smash an input monad into any monad-building function (i.e. context-injecting transformation) in such a way that the result is “flattened” and doesn’t change the structure of the input monad’s “context”. This ensures that it can “pile up” any number of monad-building calls in a “last applied, last written” order.
In addition, passMonadThrough must do so in an associative fashion, so that there’s no need to specify precedence about subsequent calls: this means they can just be written in a straightforward sequence.
The identity-related laws, finally, ensure that the monad constructor buildMonad is neutral w.r.t. to passMonadThrough both when it’s on the left side and when it’s on the right side, which completes the “smooth” sequential behaviour.
If the above starts feeling like re-building a sequential, imperative-style control flow from functional programming and these “monad” interfaces, you are on the right track.

Example: lists as monads

Here’s a nice example: the list data structure can be turned into a monad by providing adequate monadic operations on it.
In the following, l is a list, vs are values and both f and g build a list of some kind from a value. I’ll shorten buildMonad as build and passMonadThrough as through.
Let’s define:
(build v) = [v]
(l through f) = (concat (map f l))
Let’s now verify the 3 rules (being l = [v] and (f v) = l').

1) Left identity

[v] through f = (concat (map f [v])) = (concat [l']) = l' = (f v)

2) Right identity

[v] through build = (concat (map build [v])) = (concat [[v]]) = [v]

3) Associativity

(l through f) through g
= (concat (map f l)) through g
= (concat (map g (concat (map f l))))
= (concat (map g (concat [[v1'] ... [vn']])))
= (concat (map g [v1' ... vn']))
= (concat [[v1''] ... [vn'']])
= [v1'' ... vn'']
l through (y -> ((f y) through g))
= (concat (map (y -> ((f y) through g)) l))
= (concat (map (y -> (concat (map g (f y)))) l))
= (concat [((y -> (concat (map g (f y)))) v1)
  ... ((y -> (concat (map g (f y)))) vn)])
= (concat [(concat (map g (f v1))) ... (concat (map g (f vn)))])
= (concat [(concat (map g [v1'])) ... (concat (map g [vn']))])
= (concat [(concat [v1'']) ... (concat [vn''])])
= (concat [[v1''] ... [vn'']]) = [v1'' ... vn'']

Few more notes

Some languages have specific syntax to use monadic interfaces (e.g. Haskell) while others can build monadic DSLs quite easily (e.g. Clojure). Some languages with monad implementations have rigorous compile-time type-systems (e.g. again Haskell) and others are dynamic (Clojure).
Please note that the monadic behaviour rules are specified in terms of runtime behaviour, so typically the developer has to ensure that they hold with little or no compiler support.
I hope the above will help people (like me) looking for a less mathematical and a more computational perspective on monads!

by circlespainter at October 21, 2014 08:00 AM

StackOverflow

Trying to gain confidence in the benefits of TDD

I just bought The Art of Unit Testing from Amazon. I'm pretty serious about understanding TDD, so rest assured that this is a genuine question.

But I feel like I'm constantly on the verge of finding justification to give up on it.

I'm going to play devil's advocate here and try to shoot down the purported benefits of TDD in hopes that someone can prove me wrong and help me be more confident in its virtues. I think I'm missing something, but I can't figure out what.

1. TDD to reduce bugs

This often-cited blog post says that unit tests are design tools and not for catching bugs:

In my experience, unit tests are not an effective way to find bugs or detect regressions.

...

TDD is a robust way of designing software components (“units”) interactively so that their behaviour is specified through unit tests. That’s all!

Makes sense. The edge cases are still always going to be there, and you're only going to find the superficial bugs -- which are the ones that you'll find as soon as you run your app anyway. You still need to do proper integration testing after you're done building a good chunk of your software.

Fair enough, reducing bugs isn't the only thing TDD is supposed to help with.

2. TDD as a design paradigm

This is probably the big one. TDD is a design paradigm that helps you (or forces you) to make your code more composable.

But composability is a multiply realizable quality; functional programming style, for instance, makes code quite composable as well. Of course, it's difficult to write a large-scale application entirely in functional style, but there are certain compromise patterns that you can follow to maintain composability.

If you start with a highly modular functional design, and then carefully add state and IO to your code as necessary, you'll end up with the same patterns that TDD encourages.

For instance, for executing business logic on a database, the IO code could be isolated in a function that does the "monadic" tasks of accessing the database and passing it in as an argument to the function responsible for the business logic. That would be the functional way to do it.

Of course, this is a little clunky, so instead, we could throw a subset of the database IO code into a class and give that to an object containing the relevant business logic. It's the exact same thing, an adaptation of the functional way of doing things, and it's referred to as the repository pattern.

I know this is probably going to earn me a pretty bad flogging, but often times, I can't help but feel like TDD just makes up for some of the bad habits that OOP can encourage -- ones that can be avoided with a little bit of inspiration from functional style.

3. TDD as documentation

TDD is said to serve as documentation, but it only serves as documentation for peers; the consumer still requires text documentation.

Of course, a TDD method could serve as the basis for sample code, but tests generally contain some degree of mocks that shouldn't be in the sample code, and are usually pretty contrived so that they can be evaluated for equality against the expected result.

A good unit test will describe in its method signature the exact behavior that's being verified, and the test will verify no more and no less than that behavior.

So, I'd say, your time might be better spent polishing your documentation. Heck, why not just do the documentation thoroughly first, and call it Documentation-Driven Design?

4. TDD for regression testing

It's mentioned in that post above that TDD isn't too useful for detecting regressions. That's, of course, because the non-obvious edge cases are the ones that always mess up when you change some code.

It is also worth noting that chances are good that most of your code will remain the same for a pretty long time. So wouldn't it make more sense to write unit tests on an as-needed basis, whenever code is changed, keeping the old code and comparing its results to the new function's?

by Rei Miyasaka at October 21, 2014 07:38 AM

CompsciOverflow

Where would someone find amortized analysis more useful than average analysis and the opposite?

I'm trying to understand the difference between these two. They both look at what happens on average; however, amortized analysis deals with exactly the number of operations you perform at each step, while average-case analysis produces an expected cost based on probabilities.

For them both to exist as separate ways of analyzing algorithms, one must be preferred over the other in some cases; however, I can't come up with a case that actually shows this.

by jsguy at October 21, 2014 07:33 AM

StackOverflow

play framework 100% cpu

I have a script that starts and stops my Play application from cron. The trouble is the app is slow and always eating 100% CPU. I think it's because of the way it's started, as I don't seem to observe this when I start it manually. By manually I mean typing start and then hitting Ctrl-D when prompted, as directed. As this is a computer, I started thinking that some operations could be automated so as not to require my input, so I made a script that tries to start it, but obviously I won't be there for the Ctrl-D part. I have started it as:

nohup /home/play/play-2.1.3/play "start -Dhttp.port=80" &

which works, but it's always eating 100% CPU and slow.

Can it be scripted, or will I always be a slave to the machine and have to start it as described in the docs, with me physically at the terminal?

thanks

by mbrambley at October 21, 2014 07:22 AM

Find all the documents where an array field contains a document that matches some conditions

I have a MongoDB collection that stores all the user data.

A document of my collection has the following JSON form:

{
    "_id" : ObjectId("542e67e07f724fc2af28ba75"),
    "id" : "",
    "email" : "luigi@gmail.com",
    "tags" : [
        {
            "tag" : "Paper Goods:Liners - Baking Cups",
            "weight" : 2,
            "lastInsert" : 1412327492874
        },
        {
            "tag" : "Vegetable:Carrots - Jumbo",
            "weight" : 4,
            "lastInsert" : 1412597883569
        },
        {
            "tag" : "Paper Goods:Lialberto- Baking Cups",
            "weight" : 1,
            "lastInsert" : 1412327548205
        },
        {
            "tag" : "Fish:Swordfish Loin Portions",
            "weight" : 3,
            "lastInsert" : 1412597939124
        },
        {
            "tag" : "Vegetable:Carrots - alberto@gmail.com",
            "weight" : 2,
            "lastInsert" : 1412597939124
        }
    ]
}

The tag field is in the form "category:product name", and the tags field contains all the products bought by a user.

I'm writing a Scala application, and I'm using the reactivemongo driver. Now I'm writing a method that, given a category and a product, searches for all the users who have bought at least one product of the given category but have not bought a product equal to the given one.

My code now is like the following:

def findUsers(input: FindSuggestion): Future[Option[List[User]]] = {
      val category = input.category //a string
      val product = input.product  //a string, in the form category:productName
      val query = Json.obj(//create the query object)
      Users.find(query).toList.flatMap(users =>
        if(users.size > 0)
          Future{Some(users)}
        else
          Future{None}
          )
    }

To be more specific, I search for all documents where the tags field contains a document whose tag field starts with category, but where the tags field doesn't contain any document with tag == product.

How can I do that in MongoDB?

by alberto adami at October 21, 2014 07:20 AM

Scala + Ebean: orderBy not working?

This is a simple situation, but for some reason orderBy hasn't worked for me.

I have a very simple model class;

case class Sale(price: Int, name: String) {
  @Id
  var id: Long = 0

  @Formats.DateTime(pattern = "yyyy-MM-dd'T'HH:mm:ss.SSSZ")
  var saleDate: DateTime = new DateTime()
}

and the companion object;

object Sale {
    def find = new Finder[String, Sale](classOf[String], classOf[Sale])
}

Then I'm trying to fetch the list of all sale entries and order them using the saleDate value;

Sale.find
  .where
  ... // some conditions
  .orderBy("saleDate desc")
  .findMap

It seems pretty simple and straightforward to me, but it doesn't seem to work. Does anyone know what might the reason be?

by Ashesh at October 21, 2014 07:12 AM

CompsciOverflow

What problem cannot be solved by a short program?

BACKGROUND:

Recently I tried to solve a certain difficult problem that gets as input an array of $n$ numbers. For $n=3$, the only solution I could find was to have a different treatment for each of the $n!=6$ orderings of the 3 numbers. I.e., there is one solution for the case $A>B>C$, another solution for $A>C>B$, etc. (the case $A>C=B$ can be solved by any one of these two solutions).

Thinking of the case $n=4$, it seems that the only way is, again, to consider all $n!=24$ different orderings and develop a different solution for each case. While the solution in each particular case is fast, the program itself would be very large. So the runtime complexity of the problem is small, but the "development time" complexity or the "program size" complexity is very large.

This prompted me to try and prove that my problem cannot be solved by a short program. So I looked for references for similar proofs.

The first concept that I found is Kolmogorov complexity; however, the information I found about this topic is very general and includes mostly existence results.

QUESTION:

Can you describe a specific, real-life problem $P$, such that any program solving $P$ on an input array of size $n$ must have a size of at least $\Omega(f(n))$, where $f(n)$ is some increasing function of $n$?

Since the answer obviously depends on the selection of programming language, assume that we program in Java, or in a Turing machine - whichever is more comfortable for you.

Every undecidable problem trivially satisfies this requirement because it has no solution at all. So I am looking for a decidable language.

by Erel Segal Halevi at October 21, 2014 07:03 AM

StackOverflow

Using scala object inside java?

In my Java code, I am calling a method from a class which is defined in Scala, and I want to use one of its methods in Java. Here is how I call it, and it works fine:

Seq<SomeObjectType> variableName = ScalaClass.MethodInTheScalaClass(); 

I can call this function in Java in this form, but since I am calling this method from a compiled package, I can't see what is going on (and therefore I can't change it).

The problem now is that I don't know how to iterate over variableName in Java (since Seq is a Scala type).

How can I iterate over variableName or convert it to a Java object (e.g. List)?

by Daniel at October 21, 2014 06:50 AM

StackOverflow

How to split an input sequence according to the input number given

I'm writing a clojure function like:

(defn area [n locs]
  (let [a1 (first locs)]
    (vrp (rest locs))))

I basically want to call it like: (area 3 '([1 2] [3 5] [3 1] [4 2])). But when I do that, it gives me an error saying Wrong number of args (1) passed, even though I'm passing two arguments.

What I actually want this function to do is the following: whatever value of n is input (say 3), a1 should store [1 2], a2 should store [3 5], and a3 should store ([3 1] [4 2]). What should I add to the function to get that?
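
A minimal sketch of just the splitting part (leaving out the vrp call), assuming the first n-1 elements should be bound individually and everything after them grouped together; split-at does the heavy lifting:

(defn area [n locs]
  ;; split-at returns [(first n-1 items) (the rest)]
  (let [[singles grouped] (split-at (dec n) locs)]
    (concat singles [grouped])))

(area 3 '([1 2] [3 5] [3 1] [4 2]))
;; => ([1 2] [3 5] ([3 1] [4 2]))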

by Martha Pears at October 21, 2014 06:30 AM

using iterate with a custom function to generate a seq

First, I am a total beginner in Lisp and Clojure. I'm trying to do all the examples in some textbooks, but I find myself with my head spinning most of the time, particularly at the parentheses everywhere.

The task: use the iterate function with a custom function myCustomFunc to produce a sequence of successive calls to myCustomFunc.

 (defn myCustomFunc [[a b]]

   ;complex logic dumb down for this example
       (vec (+ a b))
      )
    (take 31 (iterate (fn [[a col]][a  inc(col)][0 1])))

If it isn't clear, my goal is to leave the variable a unchanged, because it is a constant, and to increment col. I could be totally wrong with my code, but one key requirement is that I must use iterate to call a function and get a seq.
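
For what it's worth, a minimal sketch of an iterate call that leaves a untouched and increments col on every step; the function must take and return the same [a col] shape, since iterate feeds each result back in:

(take 5 (iterate (fn [[a col]] [a (inc col)]) [0 1]))
;; => ([0 1] [0 2] [0 3] [0 4] [0 5])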

by hidden at October 21, 2014 06:29 AM

/r/clojure

What's your preferred way of structuring templates/directories for visitors versus signed in users?

An example would be how a top nav bar changes when a user is signed in eg on github.

There typically are 3 ways to do this (not all of them good):

  1. Make templates/views more complex by adding conditional code to show content based on the existence of a session and a possible user role type eg standard user or admin - or permission type in the case of RBAC.

  2. Use different directories for different user/permission types (once a user has logged in - session created), which often includes duplicating some of the common templates/views that don't change from when a user isn't signed in.

  3. A common template/views directory for core site/app UI components that don't change often - and either 1. or 2. for UI stuff that does change after a user session is created.

I could do this as I've done in the past, but since my 'dev world' is being dramatically changed (for the good: simplified) by learning Clojure to make my first web app, I'm wondering if there's a 'Clojure-like' ("idiomatic") approach that Clojure makes possible which I haven't yet learned?

submitted by clojure_nub

October 21, 2014 06:25 AM

StackOverflow

How to parse Cursor[JsObject] in scala reactive mongo

I have an API like this in Play 2.3 (ReactiveMongo):

 def addEndUser = Action.async(parse.json) { request =>
        val cursor: Cursor[JsObject] = collectionEndUser.find(Json.obj("mobileNumber" -> "9686563240","businessUserId" ->"1")).
        sort(Json.obj("createDate" -> -1)).cursor[JsObject]
        val futureEndUserList: Future[List[JsObject]] = cursor.collect[List]()
        futureEndUserList.map { user =>
            val x:JsObject = obj(Map("endUsers" -> toJson(user) ))
                println(x)
    }
    request.body.validate[User].map { user =>

        val jsonData = Json.obj(
            "businessUserId" ->user.businessUserId,
            "userId" -> user.userId,
            "registrantId" ->user.registrantId,
            "memberId" -> "",
            "name"   -> user.name,
            "currentPoints" -> user.currentPoints,
            "email" -> user.email,
            "mobileNumber" -> user.mobileNumber,
            "mobileCountryCode" ->user.mobileCountryCode,
            "createDate" -> (new java.sql.Timestamp(new Date().getTime)).toString,
            "updateDate" -> (new java.sql.Timestamp(new Date().getTime)).toString,
            "purchasedAmtForRedemption"->user.purchasedAmtForRedemption
        )

        collectionEndUser.insert(jsonData).map { lastError =>
                 Logger.debug(s"Successfully inserted with LastError: $lastError")
                 Created
        }
    }.getOrElse(Future.successful(BadRequest("invalid json")))
}

def findEndUserByUserId(userId: String) = Action.async {
    val cursor: Cursor[JsObject] = collectionEndUser.find(Json.obj("userId" -> userId)).
    sort(Json.obj("createDate" -> -1)).cursor[JsObject]

    val futureEndUserList: Future[List[JsObject]] = cursor.collect[List]()

    //val futureEndUserJsonArray: Future[JsArray] = futureEndUserList.map { endUser =>
        //Json.arr(endUser)
    //}

    futureEndUserList.map { user =>
        Ok(toJson(Map("endUsers" -> toJson(user) )))
    }
}

This API is called as a POST method to store those fields in the DB. But before adding to the DB, I want to get a value from a collection and use it in one of the fields. Although println(x) is printing the object like this:

{"endUsers":[{"_id":{"$oid":"543f6912903ec10f48673188"},"businessUserId":"1","createDate":"2014-10-16 12:13:30.771","currentPoints":16.0,"email":"ruthvickms@gmail.com","mobileCountryCode":"+91","mobileNumber":"9686563240","name":"Ruthvick","purchasedAmtForRedemption":50.0,"updateDate":"2014-10-17 20:23:40.725","userId":"5"},{"_id":{"$oid":"543f68c0903ec10f48673187"},"businessUserId":"1","userId":"4","name":"Ruthvick","currentPoints":"0","email":"ruthvickms@gmail.com","mobileNumber":"9686563240","mobileCountryCode":"+91","createDate":"2014-10-16 12:12:08.692","updateDate":"2014-10-16 12:12:08.692","purchasedAmtForRedemption":"0"},{"_id":{"$oid":"543f689e903ec10f48673186"},"businessUserId":"1","userId":"3","name":"Ruthvick","currentPoints":"0","email":"ruthvickms@gmail.com","mobileNumber":"9686563240","mobileCountryCode":"+91","createDate":"2014-10-16 12:11:34.079","updateDate":"2014-10-16 12:11:34.079","purchasedAmtForRedemption":"0"},{"_id":{"$oid":"543f63ef903ec10f48673185"},"businessUserId":"1","userId":"2","name":"Ruthvick","currentPoints":"0","email":"ruthvickms@gmail.com","mobileNumber":"9686563240","mobileCountryCode":"+91","createDate":"2014-10-16 11:51:35.394","updateDate":"2014-10-16 11:51:35.394","purchasedAmtForRedemption":"0"}]}

parsing like this

x.endUsers[0].name is throwing an error like

 identifier expected but integer literal found.
    println(x.endUsers[0].name)

Please help me to parse this. I'm a beginner in the Play framework.

Thanks

by user3777846 at October 21, 2014 05:57 AM

How to combine Play JSON objects with parser-combinator JSONObjects?

import play.api.libs.json._
import scala.util.parsing.json.{JSON, JSONArray, JSONObject}

I have following json array-

 val groupNameList = Json.arr(
    Json.obj(
      "groupName" -> "All",
      "maxSeverity" -> allGroupSeverityCount,
      "hostCount" -> (windowsCount + linuxCount + esxCount + networkCount + storageCount + awsLinuxCount + awsWindowsCount)),
    Json.obj(
      "groupName" -> "Private",
      "maxSeverity" -> privateGroupSeverityCount,
      "hostCount" -> (windowsCount + linuxCount + esxCount + networkCount + storageCount)),
    Json.obj(
      "groupName" -> "Public",
      "maxSeverity" -> publicGroupSeverityCount,
      "hostCount" -> (awsLinuxCount + awsWindowsCount))
   )

I want to append the following list of JSON objects to this array:

List({"groupName" : "group1", "maxSeverity" : 10, "hostCount" : 1, "members" : ["192.168.20.30", "192.168.20.31", "192.168.20.53", "192.168.20.50"]})

I want to merge the list into the array.

How do I append the given list to the JSON array using Scala?

by user3322141 at October 21, 2014 05:52 AM

/r/clojure

How do I create database tables with Korma?

I'm trying to make a simple application in Clojure that uses a database. I'm fine using the H2 database engine -- it's an engine that runs entirely in Java, so you don't need to install the database separately. Great for my use.

Looking around, it seems like Korma is the way to go. It's a dsl for interacting with databases in Clojure. Yay!

But I'm having trouble actually using Korma. The examples show some basic things, but I am somewhat lost -- Korma has a concept called "entity", which seems to be the Clojure object that represents a SQL table. It has functions called create-entity and defentity, but calling both of them doesn't seem to create the table; inserts don't work. My entire core.clj is here:

(ns dbexplore.core
  (:require [korma.db :as db]
            [korma.core]))

(def db-connection (db/h2 {:db "./resources/db/dbexplore.db"}))
(db/defdb korma-db db-connection)

(korma.core/defentity users)
;; how do I say the properties a user has? I don't want foreign keys here,
;; so it's not a has-one or has-many

(defn add-user [first last]
  (korma.core/insert users
    (korma.core/values {:first first :last last})))

(defn -main []
  (println "This is the main function. Creating the table.")
  (korma.core/create-entity users)
  (println "inserting john")
  (let [id (add-user "john" "doe")]
    (println "John's id is" id)))

When I lein run this, I get an exception:

Caused by: org.h2.jdbc.JdbcSQLException: Table "users" not found; SQL statement:
INSERT INTO "users" ("first", "last") VALUES (?, ?) [42102-182]

Somehow I have to create a table. It seems the Korma documentation assumes your database is already set up with the tables you want; it doesn't explain how to create tables.

So my main question is: how do I create tables in Korma? I'd rather not have to use the exec-raw function, since the whole point of DSLs is to avoid having to use the underlying language.

If one exists, I'd appreciate a script that starts with a nonexistent database, sets one up, inserts data, deletes data, and queries it, but I can probably figure this out if I can create tables.

Alternately, is there a better library for SQL interaction? I'm certainly not wedded to Korma if there's a more appropriate library.
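
For the record, a minimal sketch of the usual workaround, assuming table creation does have to go through raw SQL (Korma's DSL covers queries, not DDL) via korma.core/exec-raw against the H2 database:

;; run once at startup, before any insert
(korma.core/exec-raw
  "CREATE TABLE IF NOT EXISTS users (
     id IDENTITY PRIMARY KEY,
     first VARCHAR(100),
     last VARCHAR(100))")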

submitted by zck

October 21, 2014 05:49 AM

CompsciOverflow

Polynomial time optimisation algorithm for a poly-time computable function with bounded number of maxima?

Suppose we have a polynomial-time algorithm for computing a function (which we think of as defined on rational numbers between $0$ and $1$ of limited binary length $n$). We know that this function is made up of $m$ strictly monotone pieces, or equivalently that it has up to $m-1$ maxima/minima. Can we find these maxima in time that is polynomial in $m$ and $n$?

by Daniels Pictures at October 21, 2014 05:37 AM

QuantOverflow

Volatility skew and how to capture it?

We see in the market that an implied volatility surface is not flat. Based on this observation, different models were developed to capture the structure, e.g. CEV / SABR.

A measure often used for the skew is a risk reversal, i.e.

$$\sigma_{25,c}-\sigma_{25,p}$$

and butterfly

$$\frac{\sigma_{25,c}+\sigma_{25,p}}{2}-\sigma_{ATM}$$

where $\sigma_{25,c}$ is the implied volatility of $25$ delta call.

Looking at the skew, you are interested in the slope and curvature. The mathematical objects would be, for the slope of a function $f$:

$$\frac{f(x+h)-f(x)}{h}$$

and for the curvature

$$\frac{f(x+h)-2f(x)+f(x-h)}{h^2}$$

So why are the above measures (RR and BF) not constructed like this? Should they be seen as approximations?

Moreover, why is it common to look at just one specific RR / BF, 25 delta for example? Wouldn't it be more reasonable to calculate these measures for every strike (measured in delta) on the grid? Obviously the slope and curvature can change across deltas.

by user8 at October 21, 2014 05:26 AM

CompsciOverflow

How to properly solve this Hidden Markov Model problem?

I got an exercise problem that should be seen as an HMM scenario, where I have to argue some statements. However, I'm quite confused about how to properly solve it and justify my solutions.

The problem says:

Imagine you want to determine the annual temperature centuries ago, when of course there weren't any thermometers or records. Nature is a worthwhile source of evidence; we can use it by looking at the rings inside trees. There is reliable evidence suggesting a relation between the rings inside trees and temperature. There will be 2 different temperature states, WARM (W) and COLD (C), and three discretized tree ring sizes: SMALL (S), MEDIUM (M) and LARGE (L). Some researchers have provided two matrices:

$\begin{bmatrix}.7 & .3\\.4 & .6\end{bmatrix}$

This is the transition matrix, so the probability of remaining in the COLD state if COLD is present is $.6$, and the probability of passing from COLD to WARM is $.4$.

Also, a second (emission) matrix with the relation between ring size and the temperature over the year:

$\begin{bmatrix}.1 & .4 & .5\\.7 & .2 & .1\end{bmatrix}$

So, the problem asks how to calculate the probability of a sequence such as

SSSMMLLL

occurring. I considered multiplying the 4 distinct cases to generate a Markov matrix with transitions among tree-ring sizes. However, I never got a matrix whose rows sum to 1, as they should.

How could I solve this?
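
For sequence likelihoods, the standard tool is the forward algorithm rather than a ring-to-ring Markov matrix. A Clojure sketch, under the assumption of a uniform initial state distribution (the exercise does not state one); states are 0 = WARM, 1 = COLD, observations 0 = S, 1 = M, 2 = L:

(def A [[0.7 0.3]
        [0.4 0.6]])           ; state transition matrix
(def B [[0.1 0.4 0.5]
        [0.7 0.2 0.1]])       ; emission matrix (ring sizes)
(def init [0.5 0.5])          ; assumed uniform initial distribution

(defn forward
  "Final forward probabilities alpha_T(j) for an observation sequence."
  [obs]
  (let [alpha0 (mapv #(* (init %) (get-in B [% (first obs)])) [0 1])]
    (reduce (fn [alpha o]
              (mapv (fn [j]
                      (* (get-in B [j o])
                         (reduce + (map #(* (alpha %) (get-in A [% j])) [0 1]))))
                    [0 1]))
            alpha0
            (rest obs))))

;; P(SSSMMLLL) is the sum of the final forward probabilities:
(reduce + (forward [0 0 0 1 1 2 2 2]))

The rows of $A$ and $B$ sum to 1, but observation-to-observation transition frequencies need not, which is why the attempted ring-size Markov matrix does not normalize.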

by diegoaguilar at October 21, 2014 05:24 AM

StackOverflow

clojure assignment of variables in a loop function that returns a vector

My question is: what would be a good way to update the values of currentRow, nextRow, and bitPosition every time (recur [currentRow nextRow bitPosition]) executes? Right now I am struggling with the fact that I can't just do something easy like this in Clojure. Instead I am trapped in this world of pain where I can't even figure out how to set a variable to a new value in a loop.

// I wish I could just do this

currentRow = (get myVector 0)

// here is my code

 (loop [myVector []]
        (let [
            rule ruleParam
            currentRow currentRowParam
            nextRow 2r0
            bitPosition 2r0
           ]

    (when (bit-test rule (bit-and currentRow 2r111)) 
       (
           (bit-shift-right currentRow 1)
           (bit-set nextRow 1)
           (inc bitPosition)
      ))
    (when (= false  (bit-test rule (bit-and currentRow 2r111)) )
        (bit-shift-right currentRow 1)
         (bit-set nextRow 1)
         (inc bitPosition)
      )
    (recur [currentRow nextRow bitPosition]))

     ))

Solution to my question. Thanks for all your guidance.

(defn firstFunc [[rule currentRowParam]]
  (let [currentRowLocal (bit-shift-left currentRowParam 1)]
    (loop [currentRow currentRowLocal
           nextRow 2r0
           bitPosition 0]
      (if (< bitPosition 31)
        (if (bit-test rule (bit-and currentRow 2r111))
          (recur (bit-shift-right currentRow 1)
                 (bit-set nextRow bitPosition)
                 (inc bitPosition))
          (recur (bit-shift-right currentRow 1)
                 nextRow
                 (inc bitPosition)))
        nextRow))))

(firstFunc [2r1110 2r11])

by hidden at October 21, 2014 04:46 AM

CompsciOverflow

Proof of the base case of Big Theta using induction [duplicate]

Here is a recursive definition for the runtime of some unspecified function; $a$ and $c$ are positive constants.

$T(n)=a$, if $n=2$

$T(n)=2T(n/2)+cn$ if $n>2$

Use induction to prove that $T(n)=\Theta(n\log n)$.

How should I do my base case? It is not a number, so how do I find my $n_0$?
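
A hedged hint (assuming $\log = \log_2$): unfold the claim at $n_0 = 2$,

$$c_1 \cdot 2\log 2 \;\le\; T(2) = a \;\le\; c_2 \cdot 2\log 2 \quad\Longleftrightarrow\quad c_1 \le \frac{a}{2} \le c_2,$$

so $n_0 = 2$ works provided the constants are chosen with $c_1 \le a/2 \le c_2$; the base case being a symbolic constant $a$ rather than a concrete number is not an obstacle.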

by Carol Doner at October 21, 2014 04:21 AM

CompsciOverflow

Estimating the $\beta$th moment of a uniform random variable

Let $n$ be a positive integer, $\beta > 1$, and let $X$ be a random variable uniformly distributed over $\{0, \ldots , n -1\}$. Show that $\mathbb{E}[X^\beta] \leq n^\beta / (\beta + 1)$.

I don't know how to get started with this. Can someone help me out?
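
One standard route (a hint under the stated setup, not the only proof): compare the sum with an integral of the increasing function $x^\beta$,

$$\mathbb{E}[X^\beta] = \frac{1}{n}\sum_{k=0}^{n-1} k^\beta \;\le\; \frac{1}{n}\int_0^n x^\beta\,dx = \frac{n^\beta}{\beta+1},$$

using $k^\beta \le \int_k^{k+1} x^\beta\,dx$ term by term.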

by Eric at October 21, 2014 03:57 AM

StackOverflow

Ansible playbook group_var being overridden by a role var

For an existing project, I am replacing a bash provision script with ansible -- through Vagrant first, and then rolling it out for staging/prod servers after the kinks are worked out.

The problem...

According to the ansible docs on variable precedence, group_vars should override role vars, but I'm seeing the opposite happen.

The relevant files...

Following is an excerpt from my Vagrantfile (in the project root):

config.vm.provision "ansible" do |ansible|
  ansible.playbook = "app/config/provision/provision.yml"
end

I am pointing it to a playbook a few subdirectories down, as I'm working in an existing codebase with its own practices, and can't leave the ansible stuff cluttering up the root. The playbook in question:

# app/config/provision/provision.yml
---
- hosts: all
  gather_facts: yes
  sudo: true

  roles:
    - apache
    - php

  post_tasks:
    - debug: var=vagrant_ansible_test_loading_vars
    - debug: var=apache_listen_ports

Note the debug statements for two vars, both of which are defined in a group_vars file alongside the playbook:

# app/config/provision/group_vars/all
---
vagrant_ansible_test_loading_vars: "lorem ipsum"

apache_listen_ports:
  - 80
  - 8080

The apache role I'm using defines defaults (which should have the LOWEST precedence):

# app/config/provision/roles/apache/defaults/main.yml
---
apache_listen_ports: [ 80, 8080 ]

That same role also defines vars (which should be SECOND lowest precedence):

# app/config/provision/roles/apache/vars/main.yml
---
apache_listen_ports: [ 80 ]

The (unexpected) result

And yet, upon vagrant up, I'm getting this:

TASK: [debug var=vagrant_ansible_test_loading_vars] *************************** 
ok: [default] => {
    "vagrant_ansible_test_loading_vars": "lorem ipsum"
}

TASK: [debug var=apache_listen_ports] ***************************************** 
ok: [default] => {
    "apache_listen_ports": [
        80
    ]
}

The first variable being defined and having its original value tells me that my group_vars file is being loaded. The second variable has been overridden from the group_vars value, with (apparently) the value from the role vars.

by EvanK at October 21, 2014 03:52 AM

is "lift" using an eta expansion in "Functional Programming in Scala"?

In section 4.3.2 of Functional Programming in Scala there's a definition of a function that I don't quite understand. I can see that it works, but I'm not sure why.

   def lift[A,B](f: A => B): Option[A] => Option[B] = _ map f 

In the above statement is the '_' an eta expansion? I can tell you that the ScalaIDE (eclipse plugin) tells me it's an Option[A]. So of course you can rewrite the above as:

   def lift2[A,B](f: A => B): Option[A] => Option[B] = { oa: Option[A] => oa map f }

But what I'm wondering is how the compiler knows that the _ is going to be an Option[A] in the first definition. Is it really as simple as "because the return type says we're defining a function that takes an Option[A] as its argument"?

by Langley at October 21, 2014 03:49 AM

CompsciOverflow

4 vertex, edge-weighted graph for which every shortest path tree is not a minimum spanning tree?

For an assignment, I was able to prove that for every edge-weighted graph G, every shortest path tree and every minimum spanning tree on G have at least one common edge.

One of the hints my professor gave, which I ended up not needing, was to try and make a 4 vertex graph for which every shortest path tree is not a minimum spanning tree.

But no matter how many variations I sketch out, tracing all shortest path trees from different source nodes, I always seem to get at least one shortest path tree that is also a minimum spanning tree.

Thanks for the advice!

by MMP at October 21, 2014 02:22 AM

StackOverflow

Transform a Collection of scalaz disjunctions into a single disjunction

Given the following method:

def foo(seq: Seq[Long]) : Seq[\/[String, Long]] = seq map { v =>
  for {
    bar <- returnsOptionLong1(v) \/> "first was None"
    baz <- returnsOptionLong2(bar) \/> "second was None"
  } yield baz  
}

I want to implement the following method:

def qux(initial: Seq[\/[String, Long]]) : \/[String, Seq[Long]] = {
  // ... Fill-in implementation here ...
}

In other words: how does one use scalaz to transform a sequence of disjunctions into a disjunction with the right side being a sequence?

Note: If a cleaner implementation would involve making changes to foo as well (e.g. modifications involving changing map to flatMap), please include those as well.

by Ryan Delucchi at October 21, 2014 02:00 AM

StackOverflow

Clojure: how to explicitly choose JVM in the environment with Leiningen/Lighttable

In my Windows 7 (64-bit) environment, I have quite a few JVMs available:

C:\Program Files (x86)\Java\j2re1.4.2_12\bin\client\jvm.dll
C:\Program Files (x86)\Java\jre6\bin\client\jvm.dll
D:\programs\Java\jdk1.7.0_45\jre\bin\server\jvm.dll
D:\programs\Java\jre7\bin\server\jvm.dll

Currently, with Lighttable/Leiningen (I don't know which makes the choice, and how), it uses

C:\Program Files (x86)\Java\j2re1.4.2_12\bin\client\jvm.dll

But I really would like to try

D:\programs\Java\jdk1.7.0_45\jre\bin\server\jvm.dll

It's even more puzzling that when I type

java -version

I got the following:

D:\yushen>java -version
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)

It seems that's what I want to have inside Lighttable/Leiningen.

Could you show me how to make the explicit choice/configuration?

I tried Google, but couldn't find any leads.

Thanks a lot!
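
On the Leiningen side, one documented knob is the :java-cmd entry in project.clj (or the LEIN_JAVA_CMD / JAVA_CMD environment variables), which points at the java executable to launch. A sketch against the paths above (the project name is invented for illustration):

;; project.clj -- :java-cmd selects the JVM that Leiningen launches;
;; the path is the JDK 7 install mentioned in the question
(defproject my-project "0.1.0-SNAPSHOT"
  :java-cmd "D:\\programs\\Java\\jdk1.7.0_45\\bin\\java.exe")

How Light Table picks a JVM for its own process is a separate question; the setting above should only govern projects run through Leiningen.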

by Yu Shen at October 21, 2014 01:33 AM

arXiv Programming Languages

Type Targeted Testing. (arXiv:1410.5370v1 [cs.PL])

We present a new technique called type targeted testing, which translates precise refinement types into comprehensive test-suites. The key insight behind our approach is that through the lens of SMT solvers, refinement types can also be viewed as a high-level, declarative, test generation technique, wherein types are converted to SMT queries whose models can be decoded into concrete program inputs. Our approach enables the systematic and exhaustive testing of implementations from high-level declarative specifications, and furthermore, provides a gradual path from testing to full verification. We have implemented our approach as a Haskell testing tool called TARGET, and present an evaluation that shows how TARGET can be used to test a wide variety of properties and how it compares against state-of-the-art testing approaches.

by Eric L. Seidel, Niki Vazou, Ranjit Jhala at October 21, 2014 01:30 AM

Tropical Spectral Theory of Tensors. (arXiv:1410.5361v1 [math.CO])

We introduce and study tropical eigenpairs of tensors, a generalization of the tropical spectral theory of matrices. We show the existence and uniqueness of an eigenvalue. We associate to a tensor a directed hypergraph and define a new type of cycle on a hypergraph, which we call an H-cycle. The eigenvalue of a tensor turns out to be equal to the minimal normalized weighted length of H-cycles of the associated hypergraph. We show that the eigenvalue can be computed efficiently via a linear program. Finally, we suggest possible directions of research.

by Emmanuel Tsukerman at October 21, 2014 01:30 AM

Optimized Disk Layouts for Adaptive Storage of Interaction Graphs. (arXiv:1410.5290v1 [cs.DB])

We are living in an ever more connected world, where data recording the interactions between people, software systems, and the physical world is becoming increasingly prevalent. This data often takes the form of a temporally evolving graph, where entities are the vertices and the interactions between them are the edges. We call such graphs interaction graphs. Various application domains, including telecommunications, transportation, and social media, depend on analytics performed on interaction graphs. The ability to efficiently support historical analysis over interaction graphs requires effective solutions for the problem of data layout on disk. This paper presents an adaptive disk layout called the railway layout for optimizing disk block storage for interaction graphs. The key idea is to divide blocks into one or more sub-blocks, where each sub-block contains a subset of the attributes, but the entire graph structure is replicated within each sub-block. This improves query I/O, at the cost of increased storage overhead. We introduce optimal ILP formulations for partitioning disk blocks into sub-blocks with overlapping and non-overlapping attributes. Additionally, we present greedy heuristic approaches that can scale better compared to the ILP alternatives, yet achieve close to optimal query I/O. To demonstrate the benefits of the railway layout, we provide an extensive experimental study comparing our approach to a few baseline alternatives.

by Robert Soulé, Bügra Gedik at October 21, 2014 01:30 AM

On Content-centric Wireless Delivery Networks. (arXiv:1410.5257v1 [cs.NI])

The flux of social media and the convenience of mobile connectivity has created a mobile data phenomenon that is expected to overwhelm the mobile cellular networks in the foreseeable future. Despite the advent of 4G/LTE, the growth rate of wireless data has far exceeded the capacity increase of the mobile networks. A fundamentally new design paradigm is required to tackle the ever-growing wireless data challenge.

In this article, we investigate the problem of massive content delivery over wireless networks and present a systematic view on content-centric network design and its underlying challenges. Towards this end, we first review some of the recent advancements in Information Centric Networking (ICN) which provides the basis on how media contents can be labeled, distributed, and placed across the networks. We then formulate the content delivery task into a content rate maximization problem over a shared wireless channel, which, contrasting the conventional wisdom that attempts to increase the bit-rate of a unicast system, maximizes the content delivery capability with a fixed amount of wireless resources. This conceptually simple change enables us to exploit the "content diversity" and the "network diversity" by leveraging the abundant computation sources (through application-layer encoding, pushing and caching, etc.) within the existing wireless networks. A network architecture that enables wireless network crowdsourcing for content delivery is then described, followed by an exemplary campus wireless network that encompasses the above concepts.

by Hui Liu, Zhiyong Chen, Xiaohua Tian, Xinbing Wang, Meixia Tao at October 21, 2014 01:30 AM

Performance Engineering of the Kernel Polynomial Method on Large-Scale CPU-GPU Systems. (arXiv:1410.5242v1 [cs.CE])

The Kernel Polynomial Method (KPM) is a well-established scheme in quantum physics and quantum chemistry to determine the eigenvalue density and spectral properties of large sparse matrices. In this work we demonstrate the high optimization potential and feasibility of peta-scale heterogeneous CPU-GPU implementations of the KPM. At the node level we show that it is possible to decouple the sparse matrix problem posed by KPM from main memory bandwidth both on CPU and GPU. To alleviate the effects of scattered data access we combine loosely coupled outer iterations with tightly coupled block sparse matrix multiple vector operations, which enables pure data streaming. All optimizations are guided by a performance analysis and modelling process that indicates how the computational bottlenecks change with each optimization step. Finally we use the optimized node-level KPM with a hybrid-parallel framework to perform large scale heterogeneous electronic structure calculations for novel topological materials on a petascale-class Cray XC30 system.

by Moritz Kreutzer, Georg Hager, Gerhard Wellein, Andreas Pieper, Andreas Alvermann, Holger Fehske at October 21, 2014 01:30 AM

Distributed Methods for High-dimensional and Large-scale Tensor Factorization. (arXiv:1410.5209v1 [cs.NA])

Given a high-dimensional and large-scale tensor, how can we decompose it into latent factors? Can we process it on commodity computers with limited memory? These questions are closely related to recommendation systems exploiting context information such as time and location. They require tensor factorization methods scalable with both the dimension and size of a tensor. In this paper, we propose two distributed tensor factorization methods, SALS and CDTF. Both methods are scalable with all aspects of data, and they show an interesting trade-off between convergence speed and memory requirements. SALS updates a subset of the columns of a factor matrix at a time, and CDTF, a special case of SALS, updates one column at a time. In our experiments, only our methods can factorize a 5-dimensional tensor with 1B observable entries, 10M mode length, and 1K rank, while all other state-of-the-art methods fail. Moreover, our methods require several orders of magnitude less memory than the competitors. We implement our methods on MapReduce with two widely applicable optimization techniques: local disk caching and greedy row assignment.

by Kijung Shin, U Kang at October 21, 2014 01:30 AM

An Algebra of Reversible Computation. (arXiv:1410.5131v1 [cs.LO])

We design an axiomatization for reversible computation called reversible ACP (RACP). It has four extendible modules, basic reversible processes algebra (BRPA), algebra of reversible communicating processes (ARCP), recursion and abstraction. Just like process algebra ACP in classical computing, RACP can be treated as an axiomatization foundation for reversible computation.

by Yong Wang at October 21, 2014 01:30 AM

Dynamic Cluster Head Node Election (DCHNE) Model over Wireless Sensor Networks (WSNs). (arXiv:1410.5128v1 [cs.NI])

WSNs are becoming an appealing research area due to their several application domains. The performance of WSNs depends on the topology of sensors and their ability to adapt to changes in the network. Sensor nodes are often resource constrained by their limited power, limited communication range, and restricted sensing capability. Therefore, they need to cooperate with each other to accomplish a specific task. Thus, clustering enables sensor nodes to communicate through the cluster head node for a continuous communication process. In this paper, we introduce a dynamic cluster head election mechanism. Each node in the cluster calculates its residual energy value to determine its candidacy to become the Cluster Head Node (CHN). With this mechanism, each sensor node compares its residual energy level to other nodes in the same cluster. Depending on the residual energy level the sensor node acts as the next cluster head. Evaluation of the dynamic CHN election mechanism is conducted using network simulator-2 (ns2). The simulation results demonstrate that the proposed approach prolongs the network lifetime and balances the energy consumption.

by Abeer Alabass, Khaled Elleithy, Abdul Razaque at October 21, 2014 01:30 AM

Simulation based Study of TCP Variants in Hybrid Network. (arXiv:1410.5127v1 [cs.NI])

Transmission control protocol (TCP) was originally designed for fixed networks to provide reliable data delivery. The improvement of TCP performance was also achieved for different types of networks with the introduction of new TCP variants. However, there are still many factors that affect the performance of TCP. Mobility is one of the major factors affecting TCP performance in wireless networks and MANETs (Mobile Ad Hoc Networks). To determine the best TCP variant from the mobility point of view, we simulate some TCP variants in a real-life scenario. This paper addresses the performance of TCP variants such as TCP-Tahoe, TCP-Reno, TCP-New Reno, TCP-Vegas, TCP-SACK and TCP-Westwood from the mobility point of view. The scenarios presented in this paper are supported by the Zone Routing Protocol (ZRP) with integration of the random waypoint mobility model in a MANET area. The scenarios cover speeds from a walking person to a vehicle and are suited particularly for mountainous and deserted areas. On the basis of simulation, we analyze Round trip time (RTT) fairness, End-to-End delay, control overhead, and the number of broken links during the delivery of data. Finally, the analyzed parameters help to find out the best TCP variant.

by Wafa Elmannai, Abdul Razaque, Khaled Elleithy at October 21, 2014 01:30 AM

Strongly Secure Quantum Ramp Secret Sharing Constructed from Algebraic Curves over Finite Fields. (arXiv:1410.5126v1 [quant-ph])

The first construction of strongly secure quantum ramp secret sharing by Zhang and Matsumoto had an undesirable feature that the dimension of quantum shares must be larger than the number of shares. By using algebraic curves over finite fields, we propose a new construction in which the number of shares can become arbitrarily large for fixed dimension of shares.

by Ryutaroh Matsumoto at October 21, 2014 01:30 AM

Unrestricted Termination and Non-Termination Arguments for Bit-Vector Programs. (arXiv:1410.5089v1 [cs.LO])

Proving program termination is typically done by finding a well-founded ranking function for the program states. Existing termination provers typically find ranking functions using either linear algebra or templates. As such they are often restricted to finding linear ranking functions over mathematical integers. This class of functions is insufficient for proving termination of many terminating programs, and furthermore a termination argument for a program operating on mathematical integers does not always lead to a termination argument for the same program operating on fixed-width machine integers. We propose a termination analysis able to generate nonlinear, lexicographic ranking functions and nonlinear recurrence sets that are correct for fixed-width machine arithmetic and floating-point arithmetic. Our technique is based on a reduction from program termination to second-order satisfaction. We provide formulations for termination and non-termination in a fragment of second-order logic with restricted quantification which is decidable over finite domains. The resulting technique is a sound and complete analysis for the termination of finite-state programs with fixed-width integers and IEEE floating-point arithmetic.

by Cristina David, Daniel Kroening, Matt Lewis at October 21, 2014 01:30 AM

Propositional Reasoning about Safety and Termination of Heap-Manipulating Programs. (arXiv:1410.5088v1 [cs.LO])

This paper shows that it is possible to reason about the safety and termination of programs handling potentially cyclic, singly-linked lists using propositional reasoning even when the safety invariants and termination arguments depend on constraints over the lengths of lists. For this purpose, we propose the theory SLH of singly-linked lists with length, which is able to capture non-trivial interactions between shape and arithmetic. When using the theory of bit-vector arithmetic as a background, SLH is efficiently decidable via a reduction to SAT. We show the utility of SLH for software verification by using it to express safety invariants and termination arguments for programs manipulating potentially cyclic, singly-linked lists with unrestricted, unspecified sharing. We also provide an implementation of the decision procedure and use it to check safety and termination proofs for several heap-manipulating programs.

by Cristina David, Daniel Kroening, Matt Lewis at October 21, 2014 01:30 AM

On the Provenance of Linked Data Statistics. (arXiv:1410.5077v1 [cs.DB])

As the amount of linked data published on the web grows, attempts are being made to describe and measure it. However, even basic statistics about a graph, such as its size, are difficult to express in a uniform and predictable way. In order to be able to sensibly interpret a statistic it is necessary to know how it was calculated. In this paper we survey the nature of the problem and outline a strategy for addressing it.

by William Waites at October 21, 2014 01:30 AM

Abstraction Refinement for Trace Inclusion of Data Automata. (arXiv:1410.5056v1 [cs.LO])

A data automaton is a finite automaton equipped with variables (counters) ranging over a multi-sorted data domain. The transitions of the automaton are controlled by first-order formulae, encoding guards and updates. We observe, in addition to the finite alphabet of actions, the values taken by the counters along a run of the automaton, and consider the data languages recognized by these automata.

The problem addressed in this paper is the inclusion between the data languages recognized by such automata. Since the problem is undecidable, we give an abstraction-refinement semi-algorithm, proved to be sound and complete, but whose termination is not guaranteed.

The novel feature of our technique is checking for inclusion, without attempting to complement one of the automata, i.e. working in the spirit of antichain-based non-deterministic inclusion checking for finite automata. The method described here has various applications, ranging from logics of unbounded data structures, such as arrays or heaps, to the verification of real-time systems.

by Radu Iosif, Adam Rogalewicz, Tomas Vojnar at October 21, 2014 01:30 AM

Axiomatizing Propositional Dependence Logics. (arXiv:1410.5038v1 [cs.LO])

We give sound and complete Hilbert-style axiomatizations for propositional dependence logic (PD), modal dependence logic (MDL), and extended modal dependence logic (EMDL) by extending existing axiomatizations for propositional logic and modal logic. In addition, we give novel labeled tableau calculi for PD, MDL, and EMDL. We prove soundness, completeness and termination for each of the labeled calculi.

by Katsuhiko Sano, Jonni Virtema at October 21, 2014 01:30 AM

Unshared Secret Key Cryptography. (arXiv:1410.5021v1 [cs.CR])

Current security techniques can be implemented with either secret key exchange or physical layer wiretap codes. In this work, we investigate an alternative solution for MIMO wiretap channels. Inspired by the artificial noise (AN) technique, we propose the unshared secret key (USK) cryptosystem, where the AN is redesigned as a one-time pad secret key aligned within the null space between transmitter and legitimate receiver. The proposed USK cryptosystem is a new physical layer cryptographic scheme, obtained by combining traditional network layer cryptography and physical layer security. Unlike previously studied artificial noise techniques, rather than ensuring non-zero secrecy capacity, the USK is valid for an infinite lattice input alphabet and guarantees Shannon's ideal secrecy and perfect secrecy, without the need of secret key exchange. We then show how ideal secrecy can be obtained for finite lattice constellations with an arbitrarily small outage.

by Shuiyin Liu, Yi Hong, Emanuele Viterbo at October 21, 2014 01:30 AM

Quantifying performance bottlenecks of stencil computations using the Execution-Cache-Memory model. (arXiv:1410.5010v1 [cs.PF])

Stencil algorithms on regular lattices appear in many fields of computational science, and much effort has been put into optimized implementations. Such activities are usually not guided by performance models that provide estimates of expected speedup. Understanding the performance properties and bottlenecks by performance modeling enables a clear view on promising optimization opportunities. In this work we refine the recently developed Execution-Cache-Memory (ECM) model and use it to quantify the performance bottlenecks of stencil algorithms on a contemporary Intel processor. This includes applying the model to arrive at single-core performance and scalability predictions for typical corner case stencil loop kernels. Guided by the ECM model we accurately quantify the significance of "layer conditions," which are required to estimate the data traffic through the memory hierarchy, and study the impact of typical optimization approaches such as spatial blocking, strength reduction, and temporal blocking for their expected benefits.

by Holger Stengel, Jan Treibig, Georg Hager, Gerhard Wellein at October 21, 2014 01:30 AM

Content-Priority based Interest Forwarding in Content Centric Networks. (arXiv:1410.4987v1 [cs.NI])

Content Centric Networking (CCN) is a recent advancement in communication networks where the current research is mainly focusing on routing & cache management strategies of CCN. Nonetheless, other perspectives such as network level security and service quality are also of prime importance; areas which have not been covered deeply so far. This paper introduces an interest forwarding mechanism to process the requests of consumers at a CCN router. Interest packets are forwarded with respect to the priorities of addressed content while the priority level settings are done by content publishers during an initialization phase using a collaborative mechanism of exchanging messages to agree to the priority levels of all content according to the content-nature. Interests with higher priority content are recorded in the Pending Interest Table (PIT) as well as forwarded to content publishers prior to those with lower priority content. A simulation study is also conducted to show the effectiveness of the proposed scheme, and we observe that the interests with higher priority content are satisfied earlier than the interests with lower priority content.

by Muhammad Aamir at October 21, 2014 01:30 AM

Gaussian Process Models with Parallelization and GPU acceleration. (arXiv:1410.4984v1 [cs.DC])

In this work, we present an extension of Gaussian process (GP) models with sophisticated parallelization and GPU acceleration. The parallelization scheme arises naturally from the modular computational structure w.r.t. datapoints in the sparse Gaussian process formulation. Additionally, the computational bottleneck is implemented with GPU acceleration for further speed up. Combining both techniques allows applying Gaussian process models to millions of datapoints. The efficiency of our algorithm is demonstrated with a synthetic dataset. Its source code has been integrated into our popular software library GPy.

by Zhenwen Dai, Andreas Damianou, James Hensman, Neil Lawrence at October 21, 2014 01:30 AM

On the Relation of Interaction Semantics to Continuations and Defunctionalization. (arXiv:1410.4980v1 [cs.LO])

In game semantics and related approaches to programming language semantics, programs are modelled by interaction dialogues. Such models have recently been used in the design of new compilation methods, e.g. for hardware synthesis or for programming with sublinear space. This paper relates such semantically motivated non-standard compilation methods to more standard techniques in the compilation of functional programming languages, namely continuation passing and defunctionalization. We first show for the linear $\lambda$-calculus that interpretation in a model of computation by interaction can be described as a call-by-name CPS-translation followed by a defunctionalization procedure that takes into account control-flow information. We then establish a relation between these two compilation methods for the simply-typed $\lambda$-calculus and end by considering recursion.

by Ulrich Schöpp at October 21, 2014 01:30 AM

Privacy Leakage in Mobile Computing: Tools, Methods, and Characteristics. (arXiv:1410.4978v1 [cs.CR])

The number of smartphones, tablets, sensors, and connected wearable devices is rapidly increasing. Today, in many parts of the globe, the penetration of mobile computers has overtaken the number of traditional personal computers. This trend and the always-on nature of these devices have resulted in increasing concerns over the intrusive nature of these devices and the privacy risks that they impose on users or those associated with them. In this paper, we survey the current state of the art on mobile computing research, focusing on privacy risks and data leakage effects. We then discuss a number of methods, recommendations, and ongoing research in limiting the privacy leakages and associated risks by mobile computing.

by Muhammad Haris, Hamed Haddadi, Pan Hui at October 21, 2014 01:30 AM

Semantic Gateway as a Service architecture for IoT Interoperability. (arXiv:1410.4977v1 [cs.NI])

The Internet of Things (IoT) is set to occupy a substantial component of future Internet. The IoT connects sensors and devices that record physical observations to applications and services of the Internet. As a successor to technologies such as RFID and Wireless Sensor Networks (WSN), the IoT has stumbled into vertical silos of proprietary systems, providing little or no interoperability with similar systems. As the IoT represents future state of the Internet, an intelligent and scalable architecture is required to provide connectivity between these silos, enabling discovery of physical sensors and interpretation of messages between things. This paper proposes a gateway and Semantic Web enabled IoT architecture to provide interoperability between systems using established communication and data standards. The Semantic Gateway as Service (SGS) allows translation between messaging protocols such as XMPP, CoAP and MQTT via a multi-protocol proxy architecture. Utilization of broadly accepted specifications such as W3C's Semantic Sensor Network (SSN) ontology for semantic annotations of sensor data provide semantic interoperability between messages and support semantic reasoning to obtain higher-level actionable knowledge from low-level sensor data.

by Pratikkumar Desai, Amit Sheth, Pramod Anantharam at October 21, 2014 01:30 AM

Pirus: A Web-based File Hosting Service with Object Oriented Logic in Cloud Computing. (arXiv:1410.4967v1 [cs.SE])

In this paper, a new Web-based File Hosting Service with Object Oriented Logic in Cloud Computing called Pirus was developed. The service will be used by the academic community of the University of Piraeus, giving users the ability to remotely store and access their personal files with no security compromises. It also offers the administrators the ability to manage users and roles. The objective was to deliver a fully operational service, using state-of-the-art programming techniques to enable scalability and future development of the existing functionality. The use of technologies such as the .NET Framework, the C# programming language, CSS and jQuery, MSSQL for database hosting, and the support of Virtualization and Cloud Computing will contribute significantly to compatibility, code reuse and reliability, and to the reduction of maintenance costs and resources. The service was installed and tested in a controlled environment to ascertain the required functionality and the offered reliability and safety with complete success.

The technologies used and supported allow future work in upgrading and extending the service. Changes and improvements, in hardware and software, in order to convert the service to a SaaS (Software as a Service) Cloud application are a logical step in order to efficiently offer the service to a wider community. Improved and added functionality offered by further development will enhance the user experience.

by Dimitrios Kallergis, Konstantinos Chimos, Vizikidis Stefanos, Theodoros Karvounidis, Christos Douligeris at October 21, 2014 01:30 AM

Transforming while/do/for/foreach-Loops into Recursive Methods. (arXiv:1410.4956v2 [cs.PL] UPDATED)

In software engineering, making a good choice between recursion and iteration is essential because their efficiency and maintainability are different. In fact, developers often need to transform iteration into recursion (e.g., in debugging, to decompose the call graph into iterations); thus, it is quite surprising that there does not exist a public transformation from loops to recursion that handles all kinds of loops. This article describes a transformation able to transform iterative loops into equivalent recursive methods. The transformation is described for the programming language Java, but it is general enough to be adapted to many other languages that allow iteration and recursion. We describe the changes needed to transform loops of types while/do/for/foreach into recursion. Each kind of loop requires a particular treatment that is described and exemplified.

by David Insa, Josep Silva at October 21, 2014 01:30 AM

Near-Optimal Scheduler Synthesis for LTL with Future Discounting. (arXiv:1410.4950v1 [cs.LO])

We study synthesis of optimal schedulers for the linear temporal logic (LTL) with future discounting. The logic, introduced by Almagor, Boker and Kupferman, is a quantitative variant of LTL in which an event in the far future has only discounted contribution to a truth value (that is a real number in the unit interval [0,1]). The precise problem we study---it naturally arises e.g. in search for a scheduler that recovers from an internal error state as soon as possible---is the following: given a Kripke frame, a formula and a number in [0, 1] called a margin, find a path of the Kripke frame that is optimal with respect to the formula up to the prescribed margin (a truly optimal path may not exist). We present an algorithm for the problem: it relies on a translation to quantitative automata and their optimal value problem, a technique that is potentially useful also in other settings of optimality synthesis.

by Shota Nakagawa, Ichiro Hasuo at October 21, 2014 01:30 AM

Type-Directed Compilation for Fault-Tolerant Non-Interference. (arXiv:1410.4917v1 [cs.CR])

Environmental noise (e.g. heat, ionized particles, etc.) causes transient faults in hardware, which lead to corruption of stored values. Mission-critical devices require such faults to be mitigated by fault-tolerance --- a combination of techniques that aim at preserving the functional behaviour of a system despite the disruptive effects of transient faults. Fault-tolerance typically has a high deployment cost -- special hardware might be required to implement it -- and provides weak statistical guarantees. It is also based on the assumption that faults are rare. In this paper, we consider scenarios where security, rather than functional correctness, is the main asset to be protected. Our contribution is twofold. Firstly, we develop a theory for expressing confidentiality of data in the presence of transient faults. We show that the natural probabilistic definition of security in the presence of faults can be captured by a possibilistic definition. Furthermore, the possibilistic definition is implied by a known bisimulation-based property, called Strong Security. Secondly, we illustrate the utility of these results for a simple RISC architecture for which only the code memory and program counter are assumed fault-tolerant. We present a type-directed compilation scheme that produces RISC code from a higher-level language for which Strong Security holds --- i.e. well-typed programs compile to RISC code which is secure despite transient faults. In contrast with fault-tolerance solutions, our technique assumes relatively little special hardware, gives formal guarantees, and works in the presence of an active attacker who aggressively targets parts of a system and induces faults precisely.

by Filippo Del Tedesco, David Sands, Alejandro Russo at October 21, 2014 01:30 AM

Fast Parallel Algorithm for Enumerating All Chordless Cycles in Graphs. (arXiv:1410.4876v1 [cs.DC])

Finding chordless cycles is an important theoretical problem in the Graph Theory area. It also can be applied to practical problems such as discovering which predators compete for the same food in ecological networks. Motivated by the problem of theoretical interest and also by its significant practical importance, we present in this paper a parallel algorithm for enumerating all the chordless cycles in undirected graphs, which was implemented in OpenCL.

by Elisângela Silva Dias, Diane Castonguay, Humberto Longo, Walid Abdala Rfaei Jradi, Hugo A. D. do Nascimento at October 21, 2014 01:30 AM

Inequality and Network Formation Games. (arXiv:1303.1434v2 [cs.GT] UPDATED)

This paper addresses the matter of inequality in network formation games. We employ a quantity that we are calling the Nash Inequality Ratio (NIR), defined as the maximal ratio between the highest and lowest costs incurred to individual agents in a Nash equilibrium strategy, to characterize the extent to which inequality is possible in equilibrium. We give tight upper bounds on the NIR for the network formation games of Fabrikant et al. (PODC '03) and Ehsani et al. (SPAA '11). With respect to the relationship between equality and social efficiency, we show that, contrary to common expectations, efficiency does not necessarily come at the expense of increased inequality.

by Samuel D. Johnson, Raissa M. D'Souza at October 21, 2014 01:30 AM

Graph Products Revisited: Tight Approximation Hardness of Induced Matching, Poset Dimension and More. (arXiv:1212.4129v2 [cs.DM] UPDATED)

Graph product is a fundamental tool with rich applications in both graph theory and theoretical computer science. It is usually studied in the form $f(G*H)$ where $G$ and $H$ are graphs, * is a graph product and $f$ is a graph property. For example, if $f$ is the independence number and * is the disjunctive product, then the product is known to be multiplicative: $f(G*H)=f(G)f(H)$.

In this paper, we study graph products in the following non-standard form: $f((G\oplus H)*J)$ where $G$, $H$ and $J$ are graphs, $\oplus$ and * are two different graph products and $f$ is a graph property. We show that if $f$ is the induced and semi-induced matching number, then for some products $\oplus$ and *, it is subadditive in the sense that $f((G\oplus H)*J)\leq f(G*J)+f(H*J)$. Moreover, when $f$ is the poset dimension number, it is almost subadditive.

As applications of this result (we only need $J=K_2$ here), we obtain tight hardness of approximation for various problems in discrete mathematics and computer science: bipartite induced and semi-induced matching (a.k.a. maximum expanding sequences), poset dimension, maximum feasible subsystem with 0/1 coefficients, unit-demand min-buying and single-minded pricing, donation center location, boxicity, cubicity, threshold dimension and independent packing.

by Parinya Chalermsook, Bundit Laekhanukit, Danupon Nanongkai at October 21, 2014 01:30 AM

Degenerate-elliptic operators in mathematical finance and higher-order regularity for solutions to variational equations. (arXiv:1208.2658v3 [math.AP] UPDATED)

We establish higher-order weighted Sobolev and Holder regularity for solutions to variational equations defined by the elliptic Heston operator, a linear second-order degenerate-elliptic operator arising in mathematical finance. Furthermore, given $C^\infty$-smooth data, we prove $C^\infty$-regularity of solutions up to the portion of the boundary where the operator is degenerate. In mathematical finance, solutions to obstacle problems for the elliptic Heston operator correspond to value functions for perpetual American-style options on the underlying asset.

by Paul M. N. Feehan, Camelia A. Pop at October 21, 2014 01:30 AM

TheoryOverflow

Aggregated Analysis [on hold]

[image: exercise statement]

The answer is:

[image: the given answer]

I am trying to study the answer but I have a couple of confusions. How did they come up with $(n-1/2)/(1-1/2)$ in the 3rd line of the answer? What geometric series did they use?
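
For reference, $(n-1/2)/(1-1/2)$ matches the closed form of a finite geometric series with first term $n$, ratio $1/2$, and last term $1$ (a hedged guess, since the exact series depends on the answer shown above):

$$n + \frac{n}{2} + \frac{n}{4} + \cdots + 1 \;=\; \frac{\text{first} - r \cdot \text{last}}{1 - r} \;=\; \frac{n - \frac{1}{2}}{1 - \frac{1}{2}} \;=\; 2n - 1.$$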

by Roy Kesserwani at October 21, 2014 01:19 AM

StackOverflow

Clojure using ref and alter for multi-thread states

I am trying to do something trivial: calculate something using agents and, if the final agent value is smaller than some ref variable, update the ref variable.

I am having trouble finding a way to update ("swap") the ref variable.

(def shortest (ref [1 2 3 4 5]))
(def var1 (ref [[1 2 3]]))

(defn transfer [avar]
  (dosync
    ;; refs must be dereferenced with @ before counting
    (if (< (count @var1) (count @shortest))
      ;; alter takes a function of the ref's current value
      (alter shortest (constantly @avar))))) ; or whatever is appropriate!

I thought swap! would work, but that's for atoms only (and I'm not sure it would work).

by user1639926 at October 21, 2014 01:12 AM

DSL syntax with optional parameters

I'm trying to handle the following DSL:

(simple-query 
  (is :category "car/audi/80")
  (is :price 15000))

That went quite smoothly, so I added one more thing: options passed to the query:

(simple-query {:page 1 :limit 100}
  (is :category "car/audi/80")
  (is :price 15000))

Now I have the problem of how to handle this case in the most civilized way. As you can see, simple-query may get a hash-map as its first element (followed by a long list of criteria) or may have no hash-mapped options at all. Moreover, I would like to have defaults as a default set of options for the case when some (or all) of them are not provided explicitly in the query.

This is what I figured out:

(def ^{:dynamic true} *defaults* {:page 1 
                                  :limit 50})

(defn simple-query [& body]
  (let [opts (first body) 
        [params criteria] (if (map? opts) 
                             [(merge *defaults* opts) (rest body)]
                             [*defaults* body])]
       (execute-query params criteria)))

I feel it's kind of messy. Any idea how to simplify this construction?
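
One possible tightening (a sketch, not the canonical answer): destructure the rest-args once and branch on map?, which drops the intermediate vector:

;; opts is the first argument, criteria the rest, body the whole list
(defn simple-query [& [opts & criteria :as body]]
  (if (map? opts)
    (execute-query (merge *defaults* opts) criteria)
    (execute-query *defaults* body)))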

by Michal at October 21, 2014 01:08 AM

Scala Mutable Option?

I want something like this:

private val cachedResponse = mutable.Option.empty[A]

def get: A = cachedResponse getOrElseUpdate db.findModel()

def update: Unit = {
  db.updateModel
  cachedResponse.empty()    // set it to None/Option.empty
}

I am not looking for a generic HashMap based memoization like this. I tried implementing it using a var Option[A] but it did not look very idiomatic to me:

private var cachedResponse: Option[A] = None

def get: A = cachedResponse getOrElse {
 cachedResponse = Option(db.findModel())
 cachedResponse.get
}

def update: Unit = {
  db.updateModel
  cachedResponse = None
}

by wrick at October 21, 2014 12:47 AM

Planet Theory

A Multilevel Bilinear Programming Algorithm For the Vertex Separator Problem

Authors: William W. Hager, James T. Hungerford, Ilya Safro
Download: PDF
Abstract: The Vertex Separator Problem for a graph is to find the smallest collection of vertices whose removal breaks the graph into two disconnected subsets that satisfy specified size constraints. In the paper 10.1016/j.ejor.2014.05.042, the Vertex Separator Problem was formulated as a continuous (non-concave/non-convex) bilinear quadratic program. In this paper, we develop a more general continuous bilinear program which incorporates vertex weights, and which applies to the coarse graphs that are generated in a multilevel compression of the original Vertex Separator Problem. A Mountain Climbing Algorithm is used to find a stationary point of the continuous bilinear quadratic program, while second-order optimality conditions and perturbation techniques are used to escape from either a stationary point or a local maximizer. The algorithms for solving the continuous bilinear program are employed during the solution and refinement phases in a multilevel scheme. Computational results and comparisons demonstrate the advantage of the proposed algorithm.

October 21, 2014 12:42 AM

On Succinct Representations of Binary Trees

Authors: Pooya Davoodi, Rajeev Raman, Srinivasa Rao Satti
Download: PDF
Abstract: We observe that a standard transformation between ordinal trees (arbitrary rooted trees with ordered children) and binary trees leads to interesting succinct binary tree representations. There are four symmetric versions of these transformations. Via these transformations we get four succinct representations of $n$-node binary trees that use $2n + n/(\log n)^{O(1)}$ bits and support (among other operations) navigation, inorder numbering, one of pre- or post-order numbering, subtree size and lowest common ancestor (LCA) queries. The ability to support inorder numbering is crucial for the well-known range-minimum query (RMQ) problem on an array $A$ of $n$ ordered values. While this functionality, and more, is also supported in $O(1)$ time using $2n + o(n)$ bits by Davoodi et al.'s (Phil. Trans. Royal Soc. A 372 (2014)) extension of a representation by Farzan and Munro (Algorithmica 6 (2014)), their redundancy, or the $o(n)$ term, is much larger, and their approach may not be suitable for practical implementations.

One of these transformations is related to the Zaks' sequence (S. Zaks, Theor. Comput. Sci. 10 (1980)) for encoding binary trees, and we thus provide the first succinct binary tree representation based on Zaks' sequence. Another of these transformations is equivalent to Fischer and Heun's (SIAM J. Comput. 40 (2011)) min-heap structure for this problem. Yet another variant allows an encoding of the Cartesian tree of $A$ to be constructed from $A$ using only $O(\sqrt{n} \log n)$ bits of working space.

October 21, 2014 12:42 AM

Improved Region-Growing and Combinatorial Algorithms for $k$-Route Cut Problems

Authors: Guru Guruganesh, Laura Sanita, Chaitanya Swamy
Download: PDF
Abstract: We study the $k$-route generalizations of various cut problems, the most general of which is the $k$-route multicut ($k$-MC) problem, wherein we have $r$ source-sink pairs and the goal is to delete a minimum-cost set of edges to reduce the edge-connectivity of every source-sink pair to below $k$. The $k$-route extensions of multiway cut ($k$-MWC), and the minimum $s$-$t$ cut problem ($k$-$(s,t)$-cut), are similarly defined. We present various approximation and hardness results for these $k$-route cut problems that improve the state-of-the-art for these problems in several cases. (i) For $k$-route multiway cut, we devise simple, but surprisingly effective, combinatorial algorithms that yield bicriteria approximation guarantees that markedly improve upon the previous-best guarantees. (ii) For $k$-route multicut, we design algorithms that improve upon the previous-best approximation factors by roughly an $O(\sqrt{\log r})$-factor, when $k=2$, and for general $k$ and unit costs and any fixed violation of the connectivity threshold $k$. The main technical innovation is the definition of a new, powerful region growing lemma that allows us to perform region-growing in a recursive fashion even though the LP solution yields a different metric for each source-sink pair. (iii) We complement these results by showing that the $k$-route $s$-$t$ cut problem is at least as hard to approximate as the densest-$k$-subgraph (DkS) problem on uniform hypergraphs.

October 21, 2014 12:42 AM

Scalable Parallel Factorizations of SDD Matrices and Efficient Sampling for Gaussian Graphical Models

Authors: Dehua Cheng, Yu Cheng, Yan Liu, Richard Peng, Shang-Hua Teng
Download: PDF
Abstract: Motivated by a sampling problem basic to computational statistical inference, we develop a nearly optimal algorithm for a fundamental problem in spectral graph theory and numerical analysis. Given an $n\times n$ SDDM matrix $\mathbf{M}$, and a constant $-1 \leq p \leq 1$, our algorithm gives efficient access to a sparse $n\times n$ linear operator $\tilde{\mathbf{C}}$ such that $${\mathbf{M}}^{p} \approx \tilde{\mathbf{C}} \tilde{\mathbf{C}}^\top.$$ The solution is based on factoring $\mathbf{M}$ into a product of simple and sparse matrices using squaring and spectral sparsification. For $\mathbf{M}$ with $m$ non-zero entries, our algorithm takes work nearly-linear in $m$, and polylogarithmic depth on a parallel machine with $m$ processors. This gives the first sampling algorithm that only requires nearly linear work and $n$ i.i.d. random univariate Gaussian samples to generate i.i.d. random samples for $n$-dimensional Gaussian random fields with SDDM precision matrices. For sampling this natural subclass of Gaussian random fields, it is optimal in the randomness and nearly optimal in the work and parallel complexity. In addition, our sampling algorithm can be directly extended to Gaussian random fields with SDD precision matrices.

October 21, 2014 12:42 AM

An Improved Scheme for Asymmetric LSH

Authors: Anshumali Shrivastava, Ping Li
Download: PDF
Abstract: A recent technical report developed a provably sublinear time algorithm for approximate \emph{Maximum Inner Product Search} (MIPS), by observing that inner products, after independent asymmetric transformations, can be converted into the problem of approximate near neighbor search in terms of the $L_2$ distance. We name the particular ALSH scheme in \cite{Report:ALSH_arXiv14} {\em L2-ALSH}. In this study, we present another asymmetric transformation scheme which converts the problem of maximum inner products into the problem of maximum correlation search. The latter can be solved efficiently by "sign random projections". We name this new scheme {\em Sign-ALSH}. Theoretical analysis shows that {\em Sign-ALSH} can be noticeably more advantageous than {\em L2-ALSH}. Our experimental study confirms the theoretical finding.

October 21, 2014 12:41 AM

Settling the Randomized k-Server Conjecture on Some Special Metrics

Authors: Wenbin Chen
Download: PDF
Abstract: In this paper, we settle the randomized $k$-server conjecture for the following metric spaces: the line, the circle, and hierarchically well-separated trees (HSTs), as well as arbitrary metric spaces when $k=2$ or $n=k+1$. Specifically, we show that there are $O(\log k)$-competitive randomized $k$-server algorithms for the above metric spaces. For any general metric space with $n$ points, we show that there is an $O(\log k \log n)$-competitive randomized $k$-server algorithm.

October 21, 2014 12:41 AM

On the Influence of Graph Density on Randomized Gossiping

Authors: Robert Elsässer, Dominik Kaaser
Download: PDF
Abstract: Information dissemination is a fundamental problem in parallel and distributed computing. In its simplest variant, the broadcasting problem, a message has to be spread among all nodes of a graph. A prominent communication protocol for this problem is based on the random phone call model (Karp et al., FOCS 2000). In each step, every node opens a communication channel to a randomly chosen neighbor for bi-directional communication.

Motivated by replicated databases and peer-to-peer networks, Berenbrink et al., ICALP 2010, considered the gossiping problem in the random phone call model. There, each node starts with its own message and all messages have to be disseminated to all nodes in the network. They showed that any $O(\log n)$-time algorithm in complete graphs requires $\Omega(\log n)$ message transmissions per node to complete gossiping, w.h.p., while for broadcasting the average number of transmissions per node is $O(\log\log n)$.

It is known that the $O(n\log\log n)$ bound on the number of transmissions required for randomized broadcasting in complete graphs cannot be achieved in sparse graphs even if they have best expansion and connectivity properties. In this paper, we analyze whether a similar influence of the graph density also holds w.r.t. the performance of gossiping. We study analytically and empirically the communication overhead generated by randomized gossiping in random graphs and consider simple modifications of the random phone call model in these graphs. Our results indicate that, unlike in broadcasting, there is no significant difference between the performance of randomized gossiping in complete graphs and sparse random graphs. Furthermore, our simulations indicate that by tuning the parameters of our algorithms, we can significantly reduce the communication overhead compared to the traditional push-pull approach in the graphs we consider.

October 21, 2014 12:41 AM

Solving Parameterized Problems by Mixing Color Coding-Related Techniques

Authors: Meirav Zehavi
Download: PDF
Abstract: We introduce a family of strategies, which we call mixing strategies, for applying color coding-related techniques to develop fast parameterized algorithms. Our strategies combine the following ideas.

* Mixing narrow sieves and representative sets, two independent color coding-related techniques.

* For certain "disjointness conditions", improving the best known computation of representative sets.

* Mixing divide-and-color-based preprocessing with the computation mentioned in the previous item, speeding up standard representative sets-based algorithms.

* Cutting the universe into small pieces in two special manners, one used in the mix mentioned in the previous item, and the other mixed with a non-standard representative sets-based algorithm to improve its running time.

We demonstrate the usefulness of our strategies by obtaining the following results. We first solve the well-studied k-Internal Out-Branching problem in deterministic time $O^*(5.139^k)$ and randomized time $O^*(3.617^k)$, improving upon the previous best deterministic time $O^*(6.855^k)$ and randomized time $O^*(4^k)$. To this end, we establish a relation between "problematic" out-trees and maximum matching computations in graphs. We then present a unified approach to improve the $O^*$ running times of the previous best deterministic algorithms for the classic k-Path, k-Tree, r-Dimensional k-Matching and Graph Motif problems, including their weighted versions, from $O^*(2.619^k)$, $O^*(2.619^k)$, $O^*(2.619^{(r-1)k})$ and $O^*(2.619^{2k})$ to $O^*(2.597^k)$, $O^*(2.597^k)$, $O^*(2.597^{(r-1)k})$ and $O^*(2.597^{2k})$, respectively. Finally, we solve the Weighted 3-Set k-Packing problem in deterministic time $O^*(8.097^k)$, improving upon the previous best $O^*(12.155^k)$ deterministic time.

October 21, 2014 12:41 AM

Proof Complexity Modulo the Polynomial Hierarchy: Understanding Alternation as a Source of Hardness

Authors: Hubie Chen
Download: PDF
Abstract: We present and study a framework in which one can present alternation-based lower bounds on proof length in proof systems for quantified Boolean formulas. A key notion in this framework is that of proof system ensemble, which is (essentially) a sequence of proof systems where, for each, proof checking can be performed in the polynomial hierarchy. We introduce a proof system ensemble called relaxing QU-res which is based on the established proof system QU-resolution. Our main results include an exponential separation of the tree-like and general versions of relaxing QU-res, and an exponential lower bound for relaxing QU-res; these are analogs of classical results in propositional proof complexity.

October 21, 2014 12:41 AM

Decidable Fragments of Logics Based on Team Semantics

Authors: Juha Kontinen, Antti Kuusisto, Jonni Virtema
Download: PDF
Abstract: We study the complexity of variants of dependence logic defined by generalized dependency atoms. Let FOC^2 denote two-variable logic with counting, and let ESO(FOC^2) be the extension of FOC^2 with existential second-order prenex quantification. We show that for any finite collection A of atoms that are definable in ESO(FOC^2), the satisfiability problem of the two-variable fragment of FO(A) is NEXPTIME-complete. We also study satisfiability of sentences of FO(A) in the Bernays-Sch\"onfinkel-Ramsey prefix class. Our results show that, analogously to the first-order case, this problem is decidable assuming the atoms in A are uniformly polynomial time computable and closed under substructures. We establish inclusion in 2NEXPTIME. For fixed arity vocabularies, we establish NEXPTIME-completeness.

October 21, 2014 12:41 AM

Linearizability is EXPSPACE-complete

Authors: Jad Hamza
Download: PDF
Abstract: It was shown in Alur et al. [1] that the problem of verifying finite concurrent systems through Linearizability is in EXPSPACE. However, there was still a complexity gap between the easy-to-obtain PSPACE lower bound and the EXPSPACE upper bound. We show in this paper that Linearizability is EXPSPACE-complete.

October 21, 2014 12:41 AM

A Polynomial Time Algorithm For The Conjugacy Decision and Search Problems in Free Abelian-by-Infinite Cyclic Groups

Authors: Bren Cavallo, Delaram Kahrobaei
Download: PDF
Abstract: In this paper we introduce a polynomial time algorithm that solves both the conjugacy decision and search problems in free abelian-by-infinite cyclic groups where the input is elements in normal form. We do this by adapting the work of Bogopolski, Martino, Maslakova, and Ventura in \cite{bogopolski2006conjugacy} and Bogopolski, Martino, and Ventura in \cite{bogopolski2010orbit}, to free abelian-by-infinite cyclic groups, and in certain cases apply a polynomial time algorithm for the orbit problem over $\mathbb{Z}^n$ by Kannan and Lipton.

October 21, 2014 12:41 AM

Hardness of Peeling with Stashes

Authors: Michael Mitzenmacher, Vikram Nathan
Download: PDF
Abstract: The analysis of several algorithms and data structures can be framed as a peeling process on a random hypergraph: vertices with degree less than k and their adjacent edges are removed until no vertices of degree less than k are left. Often the question is whether the remaining hypergraph, the k-core, is empty or not. In some settings, it may be possible to remove either vertices or edges from the hypergraph before peeling, at some cost. For example, in hashing applications where keys correspond to edges and buckets to vertices, one might use an additional side data structure, commonly referred to as a stash, to separately handle some keys in order to avoid collisions. The natural question in such cases is to find the minimum number of edges (or vertices) that need to be stashed in order to realize an empty k-core. We show that both these problems are NP-complete for all $k \geq 2$ on graphs and regular hypergraphs, with the sole exception being that the edge variant of stashing is solvable in polynomial time for $k = 2$ on standard (2-uniform) graphs.

October 21, 2014 12:41 AM

On the Hardness of Bribery Variants in Voting with CP-Nets

Authors: Britta Dorn, Dominikus Krüger
Download: PDF
Abstract: We continue previous work by Mattei et al. (Mattei, N., Pini, M., Rossi, F., Venable, K.: Bribery in voting with CP-nets. Ann. of Math. and Artif. Intell. pp. 1--26 (2013)) in which they study the computational complexity of bribery schemes when voters have conditional preferences that are modeled by CP-nets. For most of the cases they considered, they could show that the bribery problem is solvable in polynomial time. Some cases remained open---we solve two of them and extend the previous results to the case that voters are weighted. Moreover, we consider negative (weighted) bribery in CP-nets, when the briber is not allowed to pay voters to vote for his preferred candidate.

October 21, 2014 12:40 AM

Finitely unstable theories and computational complexity

Authors: Tuomo Kauranne
Download: PDF
Abstract: The complexity class $NP$ can be logically characterized both through existential second order logic $SO\exists$, as proven by Fagin, and through simulating a Turing machine via the satisfiability problem of propositional logic SAT, as proven by Cook. Both theorems involve encoding a Turing machine by a formula in the corresponding logic and stating that a model of this formula exists if and only if the Turing machine halts, i.e. the formula is satisfiable iff the Turing machine accepts its input. Trakhtenbrot's theorem does the same in first order logic $FO$. Such different orders of encoding are possible because the set of all possible configurations of any Turing machine up to any given finite time instant can be defined by a finite set of propositional variables, or is locally represented by a model of fixed finite size. In the current paper, we first encode such time-limited computations of a deterministic Turing machine (DTM) in first order logic. We then take a closer look at DTMs that solve SAT. When the length of the input string to such a DTM that contains effectively encoded instances of SAT is parameterized by the natural number $M$, we proceed to show that the corresponding $FO$ theory $SAT_M$ has a lower bound on the size of its models that grows almost exponentially with $M$. This lower bound on model size also translates into a lower bound on the deterministic time complexity of SAT.

October 21, 2014 12:40 AM

StackOverflow

Handle Joda DateTime with Anorm 2.3

I'm brand new to Play! and I'm using version 2.3.4.

So far I have used the java.util.Date type without problems, but I finally want to use a DateTime type.

So I'm trying to use the org.joda.time.DateTime type, but Anorm doesn't know how to handle this type; I get this error: could not find implicit value for parameter extractor: anorm.Column[org.joda.time.DateTime].

The part of the code giving the error is:

private val ArtistParser: RowParser[Artist] = {
  get[Long]("artistId") ~
  get[DateTime]("creationDateTime") map {
    case artistId ~ creationDateTime =>
      Artist(artistId, creationDateTime)
  }
}

My class is simply:

case class Artist (artistId: Long, creationDateTime: DateTime)

I have been searching for a solution for a long time, and I looked in particular at this post: Joda DateTime Field on Play Framework 2.0's Anorm. But I think it doesn't work with Play 2.3.4 (at least I didn't manage to make it work).

So my question is: how do you handle DateTime with Play Scala 2.3? Is there an easier way to proceed? And if not, what should I do so that Anorm handles the DateTime type correctly?
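For reference, here is a minimal sketch of the kind of implicit Column instance commonly used for this, assuming Anorm 2.3's Column.nonNull API and that the JDBC driver hands back java.sql.Timestamp or java.sql.Date values (a sketch to adapt to your schema, not a definitive implementation):

import anorm._
import org.joda.time.DateTime

// Teach Anorm to read a DateTime column by matching on the raw JDBC value.
implicit def columnToDateTime: Column[DateTime] = Column.nonNull { (value, meta) =>
  val MetaDataItem(qualified, nullable, clazz) = meta
  value match {
    case ts: java.sql.Timestamp => Right(new DateTime(ts.getTime))
    case d: java.sql.Date       => Right(new DateTime(d.getTime))
    case other => Left(TypeDoesNotMatch(
      s"Cannot convert $other (${other.getClass}) to DateTime for column $qualified"))
  }
}

With such an implicit in scope, the get[DateTime]("creationDateTime") parser above should resolve.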

by user3683807 at October 21, 2014 12:34 AM

Why eta-expansion doesn't work with implicitly added members

This doesn't work:

"%-10s %-50s %s".format _
<console>:13: error: missing arguments for method format in trait StringLike;
 follow this method with `_' if you want to treat it as a partially applied function
          "%-10s %-50s %s".format _

But this works:

import scala.collection.immutable._

scala> ("%-10s %-50s %s": StringLike[_]).format _
res91: Seq[Any] => String = <function1>

So why do I have to specify the type explicitly?
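A hedged sketch of what seems to be going on (my reading, not an authoritative one): format is not a member of java.lang.String; it is added by the implicit conversion augmentString: String => StringOps (a StringLike). The bare method reference gives the compiler neither a pinned-down receiver type nor an expected function type, so eta-expansion of the varargs method fails. Spelling either one out works:

val s = "%-10s %-50s %s"

// Option 1: call format on the wrapper type explicitly, then eta-expand.
val f1 = new scala.collection.immutable.StringOps(s).format _

// Option 2: skip eta-expansion and capture the varargs in a lambda.
val f2 = (args: Seq[Any]) => s.format(args: _*)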

by dk14 at October 21, 2014 12:33 AM

How to apply tuple to a format string in Scala?

Edit1

I have already seen this question: Applying a function to a tuple in Scala

Ideally I would simply like to do it like this:

scala> val t = ("A", "B", "C")
t: (java.lang.String, java.lang.String, java.lang.String) = (A,B,C)

scala> "%-10s %-50s %s".format(t) // or some closer syntax

Which should give output as

res12: String = A          B                                                  C

Edit2

Or, in some sense, the Scala compiler should be able to infer that I am actually calling with the correct arguments and types, such that

"%-10s %-50s %s".format(t.untuple) expands to "%-10s %-50s %s".format(t._1, t._2, t._3)

Can I use a macro to do this?

Original question follows

I have a tuple which I use for formatting a string:

scala> val t = ("A", "B", "C")
t: (java.lang.String, java.lang.String, java.lang.String) = (A,B,C)

scala> "%-10s %-50s %s".format(t.productElements.toList: _*)
warning: there were 1 deprecation warnings; re-run with -deprecation for details
res10: String = A          B                                                  C

scala> "%-10s %-50s %s".format(t._1, t._2, t._3)
res11: String = A          B                                                  C

All works fine till now. But this fails:

scala> val f = "%-10s %-50s %s".format(_)
f: Any* => String = <function1>

scala> f(t.productElements.toList: _*)
warning: there were 1 deprecation warnings; re-run with -deprecation for details
java.util.MissingFormatArgumentException: Format specifier '-50s'
    at java.util.Formatter.format(Formatter.java:2487)
    at java.util.Formatter.format(Formatter.java:2423)
    at java.lang.String.format(String.java:2797)
    at scala.collection.immutable.StringLike$class.format(StringLike.scala:270)
    at scala.collection.immutable.StringOps.format(StringOps.scala:31)
    at $anonfun$1.apply(<console>:7)
    at $anonfun$1.apply(<console>:7)
    at .<init>(<console>:10)
    at .<clinit>(<console>)
    at .<init>(<console>:11)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:704)
    at scala.tools.nsc.interpreter.IMain$Request$$anonfun$14.apply(IMain.scala:920)
    at scala.tools.nsc.interpreter.Line$$anonfun$1.apply$mcV$sp(Line.scala:43)
    at scala.tools.nsc.io.package$$anon$2.run(package.scala:25)
    at java.lang.Thread.run(Thread.java:744)

This also fails:

scala> f.apply(t)
java.util.MissingFormatArgumentException: Format specifier '-50s'
    ... (same stack trace as above)

What am I doing wrong? How can I apply tuple parameters to a "varargs"-style function?
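For what it's worth, a hedged sketch of the failure mode and a workaround (the names are mine, not part of the question). "%-10s %-50s %s".format(_) expands to a one-parameter function whose body passes that parameter as a single format argument, so %-10s consumes the whole Seq and '-50s' finds nothing. Splatting inside an explicit lambda restores the intended behavior:

val t = ("A", "B", "C")

// The parameter is explicitly the sequence; the splat happens inside,
// so each tuple element becomes its own format argument.
val f = (args: Seq[Any]) => "%-10s %-50s %s".format(args: _*)

f(t.productIterator.toSeq) // productIterator replaces the deprecated productElements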

by tuxdna at October 21, 2014 12:31 AM

clojure - list all permutations of a list

Say I have a set like this:

#{"word1" "word2" "word3"}

How could I list all ways that these words might be ordered, i.e.

word1 word2 word3
word2 word3 word1
word3 word2 word1

etc.

by dagda1 at October 21, 2014 12:25 AM

Overcoming Bias

Thrown’s Kit’s Self-Deception

Back in July 2010 Kerry Howley published a nice New York Times Magazine article on the tensions between my wife and me resulting from my choice to do cryonics. The very next month, August 2010, is the date when, in Howley's new and already-celebrated book Thrown, her alter-ego Kit first falls in love with MMA fighting:

Not until my ride home, as I began to settle back into my bones and feel the limiting contours of perception close back in like the nursery curtains that stifled the views of my youth, did it occur to me that I had, for the first time in my life, found a way out of this, my own skin. … From that moment onward, the only phenomenological project that could possibly hold interest to me was as follows: capture and describe that particular state of being to which one Sean Huffman had taken me.

I’ve read the book, and also several dozen reviews. Some reviews discuss how Kit is a semi-fictional character, and a few mention Kit’s pretentiousness and arrogance. Some disagree on if Kit has communicated the ecstasy she feels, or if those feelings are worthy of her obsession. But all the reviewers seem to take Kit at her word when she says her primary goal is to understand the ecstasy she felt in that first encounter.

Yet right after the above quote is this:

And so naturally I began to show up places where Sean might show up— the gym where he trained, the bar where he bounced, the rented basement where he lived, the restaurants where he consumed foods perhaps not entirely aligned with the professed goals of an aspiring fighter. I hope it doesn’t sound immodest to say that Sean found this attention entirely agreeable.

Kit does the same to another fighter named Erik, and later she gets despondent when Erik won't return her calls. She tracks him down to a fight, hugs him in front of the crowd, and is delighted to get his acceptance:

My moment of embarrassment had already transformed into a glow of pride. The entire room saw that I was his, and he mine.

While Kit only feels ecstasy during an actual fight, she spends all her time as a “groupie” to two fighters, Sean and Erik. (She says she is a “space-taker”, not “groupie”, but I couldn’t see the difference.) Kit mainly only goes to fights when these men fight, even when such fights are months apart. Kit’s ego comes to depend heavily on getting personal attention from these fighters, and her interest in them rises and falls with their fighting success. The book ends with her latching on to a new fighter, after Sean and Erik have fallen.

It seems to me that if Kit had wanted mainly to study her feeling of ecstasy while watching fights, she would have gone to lots of fights, and tried to break her feelings down into parts, or looked at how they changed with context. She could have also talked to and studied other fighter fans, to break down their feelings or see how those change with context. But Kit instead sought to hang with successful fighters between fights, when neither she nor they felt this ecstasy she said was her focus. She didn’t even talk with fighters much about their ecstasy feelings. What mattered most to Kit apparently was that fighters associated with her, and that they won fights.

Kit quits her philosophy program:

I knew what they would turn my project into, these small scholastics with their ceaseless referencing of better men would, if they even allowed my explorations as a subject of dissertation, demand a dull tome with the tiniest flicker of insight buried underneath 800 pages of exegeses of other men’s work. Instead of being celebrated as a pioneer of modern phenomenology, I would merely be a footnote in the future study of Schopenhauer, whom, without my prodding, no one would study in the future.

It seems to me that Kit is self-deceived. She thinks she wants to study ecstasy, but in fact she is simply star-struck. The "ecstasy" feeling that hit her so hard was her subconscious being very impressed with these fighters, and wanting badly to associate with them. And she felt very good when she succeeded at that. By associating with their superiority, she could also feel superior to the rest of the world:

I would write my fighterly thesis, but I would not fraternize with the healthy-minded; better to leave them to their prenatal yoga, their gluten-free diets, their dull if long lives of quietest self-preserving conformism.

Of course Kerry Howley, the author, does not equal Kit, the voice Kerry chooses to narrate her book. Kerry may well be very aware of Kit’s self-deception, but still found Kit a good vehicle for painting an intimate portrait of the lives of some fighters. But if so, I find it odd that none of the other dozens of reviews I’ve read of Thrown mention this.

Added 21Oct: Possible theories:

  1. Most reviewers read the book carefully, but are too stupid to notice.
  2. Most reviewers are lazy & only skimmed the book.
  3. Reviewers hate to give negative reviews, & this sounds negative.
  4. Readers crave idealistic narrators, and reviewers pander to readers.
  5. My reading is all wrong.

by Robin Hanson at October 21, 2014 12:15 AM

Dave Winer

Short podcast about re-connecting with Twitter as a developer. I have to do it, no choice. Also connecting with Facebook, RSS, the web.

October 21, 2014 12:13 AM

DragonFly BSD Digest

led(4) for you and me

Sascha Wildner brought in led(4) from FreeBSD.  It’s a driver for flashing LEDs, as you might have guessed.  I’d like to see someone make Blinkenlights, whether BeBox-style or just generally mysterious.

by Justin Sherrill at October 21, 2014 12:09 AM

Planet Theory

Structural Parameterizations of the Mixed Chinese Postman Problem

Authors: Gregory Gutin, Mark Jones, Magnus Wahlstrom
Download: PDF
Abstract: In the Mixed Chinese Postman Problem (MCPP), given a weighted mixed graph $G$ ($G$ may have both edges and arcs), our aim is to find a minimum weight closed walk traversing each edge and arc at least once. The MCPP parameterized by the number of edges in $G$ or the number of arcs in $G$ is fixed-parameter tractable as proved by van Bevern et al. (in press) and Gutin, Jones and Sheng (ESA 2014), respectively. Solving an open question of van Bevern et al. (in press), we show that unexpectedly the MCPP parameterized by the treewidth of $G$ is W[1]-hard. In fact, we prove that even the MCPP parameterized by the pathwidth of $G$ is W[1]-hard.

October 21, 2014 12:00 AM

Fixed-Points of Social Choice: An Axiomatic Approach to Network Communities

Authors: Christian Borgs, Jennifer Chayes, Adrian Marple, Shang-Hua Teng
Download: PDF
Abstract: We provide the first social choice theory approach to the question of what constitutes a community in a social network. Inspired by the classic preferences models in social choice theory, we start from an abstract social network framework, called preference networks; these consist of a finite set of members where each member has a total-ranking preference of all members in the set.

Within this framework, we develop two complementary approaches to axiomatically study the formation and structures of communities. (1) We apply social choice theory and define communities indirectly by postulating that they are fixed points of a preference aggregation function obeying certain desirable axioms. (2) We directly postulate desirable axioms for communities without reference to preference aggregation, leading to eight natural community axioms.

These approaches allow us to formulate and analyze community rules. We prove a taxonomy theorem that provides a structural characterization of the family of community rules that satisfies all eight axioms. The structure is actually quite beautiful: these community rules form a bounded lattice under the natural intersection and union operations. Our structural theorem is complemented with a complexity result: while identifying a community by the most selective rule of the lattice is in P, deciding if a subset is a community by the most comprehensive rule of the lattice is coNP-complete. Our studies also shed light on the limitations of defining community rules solely based on preference aggregation: any aggregation function satisfying Arrow's IIA axiom, or based on commonly used aggregation schemes like the Borda count or generalizations thereof, leads to communities which violate at least one of our community axioms. Finally, we give a polynomial-time rule consistent with seven axioms and weakly satisfying the eighth axiom.

October 21, 2014 12:00 AM

Gowers Norm, Function Limits, and Parameter Estimation

Authors: Yuichi Yoshida
Download: PDF
Abstract: Let $\{f_i:\mathbb{F}_p^i \to \{0,1\}\}$ be a sequence of functions, where $p$ is a fixed prime and $\mathbb{F}_p$ is the finite field of order $p$. The limit of the sequence can be syntactically defined using the notion of ultralimit. Inspired by the Gowers norm, we introduce a metric over limits of function sequences, and study properties of it. One application of this metric is that it provides a characterization of affine-invariant parameters of functions that are constant-query estimable. Using this characterization, we provide (alternative) proofs of the constant-query testability of several affine-invariant properties, including low-degree polynomials.

October 21, 2014 12:00 AM

October 20, 2014

/r/clojure

What helped you to understand Clojure / Lisps?

I've got lots of experience with various languages (assembly, c, java, js, ruby, python, functional stuff like haskell) but I've never seriously used a Lisp. I've been dabbling with Clojure and while I can do basic stuff, understand simple programs, it hasn't really "clicked" yet.

Has anyone else been here? What examples / codebases helped you to understand what makes Clojure interesting?

submitted by alexheeton

October 20, 2014 11:58 PM

StackOverflow

(Kestrel) K-combinator: why is it useful?

I have been taking up F# recently (my background is C#) and am reading the site http://fsharpforfunandprofit.com, which I am finding very helpful.

I've got to http://fsharpforfunandprofit.com/posts/defining-functions/ which is the section on combinators. I understand them all (although the Y combinator or Sage bird screws with my mind!) with the exception of the Kestrel. Scott Wlaschin gives the definition (in F#) as:

let K x y = x

I can't understand for the life of me any situation in which this would be useful. At first I thought it might be used as a chain operator, so that you can pass a value to a function and then get back the original value. I've written such an operator myself before, but as you can see it's not the same:

let (>|) x f = f x; x

If we partially apply the K combinator (with the value 5) then we get back a function that ignores its argument and instead returns 5. Again, not useful.

(K 5) = fun y -> 5

Can anyone give me an easy example of where this might be used please?
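Not an authoritative answer, but a hedged sketch of the usual motivation, transposed into Scala (where K ships in the standard library as Function.const): K turns a plain value into the constant function an API expects.

def K[A, B](x: A)(y: B): A = x

// Supply a fixed value where a function is required:
List(1, 2, 3).map(K("x"))              // List("x", "x", "x")

// The standard library equivalent:
List(1, 2, 3).map(Function.const("x")) // List("x", "x", "x")

The same trick applies in F#: (K 5) is handy exactly when a combinator or higher-order API wants a function but all you have is a value, e.g. mapping everything to a marker or supplying a default in folds.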

by Richiban at October 20, 2014 11:20 PM

Dave Winer

A podcast response to Marco Arment's piece about Twitter.

October 20, 2014 10:56 PM

CompsciOverflow

Is master theorem applicable in this case?

T(n) = T(n/2). I do not think the master theorem applies because there is no n term: there is no k such that n^k = 0. Here is the definition I am using: http://gyazo.com/b9548f57b36372df1e5715d38f578403

by HobiiinER at October 20, 2014 10:54 PM

StackOverflow

Missing *out* in Clojure with Lein and Ring

I am running Lein 2 and cider 0.7.0. I made a sample ring app that uses ring/run-jetty to start.

(ns nimbus-admin.handler
  (:require [compojure.core :refer :all]
            [compojure.handler :as handler]
            [clojure.tools.nrepl.server :as nrepl-server]
            [cider.nrepl :refer (cider-nrepl-handler)]
            [ring.adapter.jetty :as ring]
            [clojure.tools.trace :refer [trace]]
            [ring.util.response :refer [resource-response response redirect content-type]]
            [compojure.route :as route])
  (:gen-class))


(defroutes app-routes 
  (GET "/blah" req "blah")
  (route/resources "/")
  (route/not-found (trace "not-found" "Not Found")))

(def app (handler/site app-routes))

(defn start-nrepl-server []
  (nrepl-server/start-server :port 7888 :handler cider-nrepl-handler))

(defn start-jetty [ip port]
  (ring/run-jetty app {:port port :ip ip}))

(defn -main
  ([] (-main 8080 "0.0.0.0"))
  ([port ip & args] 
     (let [port (Integer. port)]
       (start-nrepl-server)
       (start-jetty ip port))))

then connect to it with cider like:

cider-connect 127.0.0.1 7888

I can navigate to my site and eval forms in emacs and it will update what is running live in my nrepl session, so that is great.

I cannot see output from any of (print "test"), (println "test"), or (trace "out" 1).

Finally, my project file:

(defproject nimbus-admin "0.1.0"
  :description ""
  :url ""
  :min-lein-version "2.0.0"
  :dependencies [[org.clojure/clojure "1.6.0"]
                 [com.climate/clj-newrelic "0.1.1"]
                 [com.ashafa/clutch "0.4.0-RC1"]
                 [ring "1.3.1"]
                 [clj-time "0.8.0"]
                 [midje "1.6.3"]
                 [org.clojure/tools.nrepl "0.2.5"]
                 [ring/ring-json "0.3.1"]
                 [org.clojure/tools.trace "0.7.8"]
                 [compojure "1.1.9"]
                 [org.clojure/data.json "0.2.5"]
                 [org.clojure/core.async "0.1.346.0-17112a-alpha"]
                 ]
  :plugins [[lein-environ "1.0.0"]
            [cider/cider-nrepl "0.7.0"]]
  :main nimbus-admin.handler)

I start the site with lein run

by Steve at October 20, 2014 10:23 PM

QuantOverflow

Option pricing: where to get the dividend yield from?

I'm trying to apply the Black & Scholes formula to price a vanilla equity option on a real example, but I'm struggling a little bit with the dividend yield.

Let's assume I have a stock that trades at 50 dollars and the announced dividend in 100 days is 5 dollars. Is the dividend yield = (100 / 252 days) x 5 / 50 = 3.97%? Am I right?

The day after, would it be (assuming the stock price didn't change): (99 / 252 days) x 5 / 50 = 3.93%?

Last question, please: if the next dividend is not announced yet, where do we get the dividend yield from?

I don't have any problem with applying the Black-Scholes formula; I'm just trying to apply it to a real example.
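For reference, one common convention (an assumption here, not the only market practice) converts a known discrete dividend $D$ paid within the horizon $T$ (in years) into the continuous yield $q$ that reproduces the same forward price, ignoring discounting of the dividend itself:

$$ S e^{-qT} = S - D \quad\Longrightarrow\quad q = -\frac{1}{T}\ln\left(1 - \frac{D}{S}\right) $$

With $S = 50$, $D = 5$ and $T = 100/252$, this gives $q = -(252/100)\ln(0.9) \approx 26.6\%$ per annum; note that the year fraction divides rather than multiplies. When no dividend is announced, practitioners typically fall back on analyst forecasts or on implied yields backed out of forward or futures prices.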

Cheers in advance.

by Plouf at October 20, 2014 10:23 PM

/r/compsci

A new recursive Minimum Spanning Tree algorithm - requesting help in runtime analysis

Fully recursive variant

  • Assume G=(V,E) is connected and edge weights are distinct.

  • Determine median of E and split accordingly into sets E1 and E2, edges smaller than or equal to the median, and edges larger than the median, respectively.

  • Run any graph traversal algorithm on (V,E1) to find its connected components and whether these components contain any cycles.

  • For each connected component

    • If it is acyclic, add all of its edges to the result.
    • If it has a cycle, recurse on it and add its MST to the result.
  • Contract components into nodes, thus creating a new node set V', and a new edge set E' without the edges from E1 and the intra-component edges from E2.

  • Recurse on (V',E') and add its MST to the result.

  • Return result.

Pseudocode

mst(V, E) {
  result = {}
  (E1, E2) = split(E, median(E))
  components = dfs(V, E1)
  foreach (component in components) {
    if (component.acyclic()) {
      result += component.edges()
    } else {
      result += mst(component.nodes(), component.edges())
    }
  }
  (V', E') = contract(V, E, components)
  result += mst(V', E')
  return result
}

Comments

This variant perfectly demonstrates my idea on how to decompose an MST problem into subproblems: Several small MST subproblems in the components, and an overarching MST subproblem between component supernodes.

The number of components in (V, E1) is one more than the number of MST edges in E2:

  • If there is only one component, then it is connected, and obviously all MST edges are in E1 and none are in E2. The contract step will discard all edges in E2.
  • If there are two components, then there are no edges in E1 that connect these two components, but there is a minimal edge in E2 that does, making it part of the MST. The contract step will discard all other edges from E2.
  • If there are k components, there are k-1 MST edges in E2, the exact number required to connect all the components. The contract step however will not discard all other edges, necessitating the use of recursion.

An interpretation of this algorithm is that it essentially tries to triangulate MST edges in E, subdividing by both weights and spatial structure.
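To make the split-and-classify step concrete, here is a hedged Scala sketch of just that step (all names are mine; the sort-based split is O(m log m), where a true median selection would be O(m)):

case class Edge(u: Int, v: Int, w: Double)

// Split the edge set at the median weight: E1 = smaller half, E2 = rest.
def splitAtMedian(edges: Vector[Edge]): (Vector[Edge], Vector[Edge]) =
  edges.sortBy(_.w).splitAt((edges.length + 1) / 2)

// Label each vertex 0..n-1 with a representative of its component in (V, E1),
// via path-compressing union-find.
def componentOf(n: Int, e1: Vector[Edge]): Array[Int] = {
  val parent = Array.tabulate(n)(identity)
  def find(x: Int): Int =
    if (parent(x) == x) x else { parent(x) = find(parent(x)); parent(x) }
  for (Edge(u, v, _) <- e1) parent(find(u)) = find(v)
  Array.tabulate(n)(find)
}

// A component spanning k vertices is acyclic iff exactly k-1 E1 edges fall
// inside it; it is then a tree, and all of its edges are MST edges.

The acyclicity test in the last comment is just the counting observation from the list above, so no separate cycle-detection pass is needed.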

It is very hard to reason about runtime. Operations apart from the recursions can clearly be done in O(m), where m=|E|. If we assume an MST can be done in O(m), the recursions on the components also add up to O(m). However, I have no idea how to deal with the last MST(V', E') call; I cannot reason about limits on the number of edges in E', and thus the total runtime could be anywhere from O(m) to O(m log m).

Addition of one or two Boruvka steps into the algorithm might help runtime by halving or quartering the number of nodes. Not sure whether it helps (reasoning about the) runtime.

An approximate median like the median of medians might suffice for practical implementations while keeping the runtime unchanged; calculating the exact median in O(m) is actually pretty slow in practice.

Instead of median, it could be possible to use another k-th element, like n, sqrt(m), or something else. It might modify runtime or help runtime analysis.

Implementations might not necessarily involve passing nodes around.


As a modification to Borůvka's algorithm, replacing the simple cheapest-edge selection

  • Assume G=(V,E) is connected and edge weights are distinct.

  • MST:

    • While there are edges

      • Find some MST edges
      • Contract the found edges
  • Find some MST edges:

    • Determine median of E and split accordingly into sets E1 and E2, edges smaller than or equal to the median, and edges larger than the median, respectively.
    • Run any graph traversal algorithm on (V,E1) to find its connected components and whether these components contain any cycle.
    • For each connected component

      • If it is acyclic, add all of its edges to the result
      • If it has a cycle, recurse on it and add its result to the result
    • return result

Pseudocode

MST(V, E) {
  result = {}
  while (|E| > 0) {
    someMstEdges = findSomeMstEdges(V, E)
    result += someMstEdges
    (V, E) = contract(V, E, someMstEdges)
  }
  return result
}

findSomeMstEdges(V, E) {
  result = {}
  (E1, E2) = split(E, median(E))
  components = dfs(V, E1)
  foreach (component in components) {
    if (component.acyclic()) {
      result += component.edges()
    } else {
      result += findSomeMstEdges(component.nodes(), component.edges())
    }
  }
  return result
}

Comments

The essence of this version is to remove the second recursion from the fully recursive algorithm to ensure that it runs in O(m). This way we don't find all MST edges, and thus we need to run several iterations.

However, it could be easier to reason about the number of MST edges identified in a single loop:

  • Since the graph and all components we identify are connected, we get at least one MST edge for each actual call of findSomeMstEdges.
  • Both acyclic and cyclic components are helpful in this manner: acyclic components directly yield MST edges, while cyclic components could yield additional components (isolated nodes could be a problem though).

The number of MST edges identified in a single loop, or the number of edges removed/remaining, directly affects the problem size of the next loop.

Again, addition of one or two Boruvka steps might help reasoning about runtime.


As a modification to the Karger-Klein-Tarjan randomized expected linear time algorithm

  • Input: A graph G0 with no isolated vertices

  • 1) If G0 is empty return an empty forest

  • 2) Create a contracted graph G by running two successive Borůvka steps on G0

  • 3) Determine median of G.edges() and split accordingly into sets E1 and E2. Recursively apply the algorithm to (V, E1) to get its minimum spanning forest F.

  • 4) Discard all edges from E2 that point to nodes within the same component of F. (Trivial case of "Remove all F-heavy edges from G (where F is the forest from step 3) using a linear time minimum spanning tree verification algorithm.")

  • 5) Recursively apply the algorithm to G to get its minimum spanning forest.

  • Output: The minimum spanning forest of G and the contracted edges from the Borůvka steps

Comments

We basically replace the random sampling step with a deterministic median-split step. This results in a very similar, if not equivalent, algorithm to the fully recursive variant, with a different interpretation.

In the Karger-Klein-Tarjan algorithm, they use the Random Sampling Lemma to give an expected number of the edges passed to the last recursion.

We too seek a limit on this number. However I am not sure whether the Random Sampling Lemma or an analogue applies to this modified algorithm.

I don't see why it would suddenly lose its expected linear time, since random coin flips can construct the same partition of E into E1 and E2, but it might happen that this actually manages to construct a worse/worst-case scenario. However, I have not found any proof of whether it remains expected linear time, becomes linear time, or actually becomes worse.

submitted by FrigoCoder

October 20, 2014 10:13 PM

UnixOverflow

How can I run PostgreSQL on FreeBSD?

When I run psql, it gives me the following error:

psql: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.5432"?

Is there any solution?

by ilhan at October 20, 2014 10:10 PM

StackOverflow

ansible: pass variable to a handler

I use an "eye" as a supervisor and on changes in templates have to runs something like this:

eye load service.rb
eye restart service.rb

I want to define this as a single handler for all the apps and call it like

eye reload appname

And have the handler operate like this:

- name: reload eye service
  command: eye load /path/{{ service }}.rb && eye restart {{ service }}

But I can't find a way to pass a variable to a handler. Is it possible?

by user3706657 at October 20, 2014 10:09 PM

Fefe

In England there has been the first conviction over sex mangas ...

In England there has been the first conviction over sex mangas depicting children.
A 39-year-old UK man has been convicted of possessing illegal cartoon drawings of young girls exposing themselves in school uniforms and engaging in sex acts.
It is not easy to name victims or injured parties in this case. For my part, I much prefer a pedophile working off his urges on mangas to real children becoming victims.

The sentence was nine months suspended, so for now the man remains at large.

October 20, 2014 10:01 PM

StackOverflow

Akka cluster-sharding: Can Entry actors have dynamic props

Akka Cluster-Sharding looks like a good match for a use case I have: creating single instances of stateful persistent actors across Akka nodes.

I'm not clear, though, on whether it is possible to have an Entry actor type that requires arguments to construct it. Or maybe I need to reconsider how the Entry actor gets this information.

object Account {
  def apply(region: String, accountId: String): Props = Props(new Account(region, accountId))
}

class Account(val region: String, val accountId: String) extends Actor with PersistentActor { ... }

Whereas ClusterSharding.start takes in a single Props instance used to create all Entry actors.

From akka cluster-sharding:

val counterRegion: ActorRef = ClusterSharding(system).start(
  typeName = "Counter",
  entryProps = Some(Props[Counter]),
  idExtractor = idExtractor,
  shardResolver = shardResolver)

And then it resolves the Entry actor that receives the message based on how you define the idExtractor. From the source code for the shard, it can be seen that the id is used as the name for a given Entry actor instance:

def getEntry(id: EntryId): ActorRef = {
  val name = URLEncoder.encode(id, "utf-8")
  context.child(name).getOrElse {
    log.debug("Starting entry [{}] in shard [{}]", id, shardId)

    val a = context.watch(context.actorOf(entryProps, name))
    idByRef = idByRef.updated(a, id)
    refById = refById.updated(id, a)
    state = state.copy(state.entries + id)
    a
  }
}

It seems I should instead have my Entry actor figure out its region and accountId by the name it is given, although this does feel a bit hacky now that I'll be parsing it out of a string instead of directly getting the values. Is this my best option?
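A hedged sketch of that name-parsing option (the region:accountId id format and all names here are assumptions for illustration, not an Akka convention). The shard names each entry actor after its URL-encoded id, so the entry can recover its constructor data from self.path.name; a plain Actor is shown for brevity, but a PersistentActor works the same way:

import java.net.URLDecoder
import akka.actor.Actor

class Account extends Actor {
  // Entry ids are URL-encoded when used as actor names, so decode first.
  val Array(region, accountId) =
    URLDecoder.decode(self.path.name, "utf-8").split(":", 2)

  def receive = {
    case _ => () // handle commands for (region, accountId)
  }
}

The matching idExtractor would then build ids as s"$region:$accountId" so that both halves survive the round trip.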

by Rich at October 20, 2014 09:47 PM

CompsciOverflow

Rigorous Proof of Insertion Sort

I am currently self-studying the CLRS book (outside of any course, so I have no access to an instructor).

I am stuck proving insertion sort correct; the proof in the CLRS book is not very formal.

Here's the algorithm:

INSERTION-SORT(A)
   for j=2 to A.length (= n)
      key = A[j]
      i = j-1
      while (0<i and key<A[i])
         A[i+1]=A[i]
         i = i-1
      end while
      A[i+1]=key
   end for
end procedure

I tried to formalize the proof with the following pre-post conditions:

Pre-Condition: $A=A_{org}$ and $j=2$ (I.e. $A_{org}$ holds the original values of $A$)
Post-Condition: The array $A$ consists of the same elements as in $A_{org}$ but in a sorted order that is $\forall i_1,i_2\in\{1..n\}, i_1<i_2\to A[i_1]\leq A[i_2]$.

My loop invariant is:
($p$ denotes the $p$-th iteration)

$I(p)="\text{The array $A[1..j-1]$ consists of the same elements as in $A_{org}[1..j-1]$} \land \forall i_1,i_2\in\{1..j-1\}, i_1<i_2\to A[i_1]\leq A[i_2] \land j=2+p"$

Now, when I try to prove the inductive step, I get stuck and cannot proceed because of the nested while loop and because of the informal sentence "The array $A[1..j-1]$ consists of the same elements as in $A_{org}[1..j-1]$".

Any help on how to prove insertion sort rigorously, and how to formalize the sentence "The array $A[1..j-1]$ consists of the same elements as in $A_{org}[1..j-1]$", will be appreciated (I want my loop invariant to contain only mathematical symbols and not informal English phrases).
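One standard way to make "consists of the same elements" precise (offered as a suggestion, not the canonical choice) is multiset equality, stated either through counting or through a permutation:

$$\forall v.\ \bigl|\{\, i \in \{1,\ldots,j-1\} : A[i] = v \,\}\bigr| = \bigl|\{\, i \in \{1,\ldots,j-1\} : A_{org}[i] = v \,\}\bigr|$$

or equivalently: there exists a permutation $\sigma$ of $\{1,\ldots,j-1\}$ with $A[i] = A_{org}[\sigma(i)]$ for all $i$. Either form can replace the informal clause in the invariant; the nested while loop then needs its own inner invariant relating the shifted suffix to the array's state at loop entry.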

(BTW: I am trying to write the proof in the same style as in Susanna Epp's Discrete Mathematics book.)

Any help will be appreciated. Thanks.

by Saita at October 20, 2014 09:43 PM

QuantOverflow

Robust Returns-Based Style Analysis

Sharpe's Return-Based Style Analysis is an interesting theory but, due to the limits of linear regression, flawed in practice when working with long-short funds or funds that change strategies over shorter periods of time.

I have found a few papers looking into improvements to make the calculations more robust; Markov, Muchnik, Krasotkina, Mottl (2006), for instance, seems fairly reasonable. However, they commonly only deal with the time-varying beta issue.

I was wondering if there is anyone out there doing work on the limitations of linear regression for style analysis, in particular more robust variance-covariance matrices for the minimization of the objective function.

by rhaskett at October 20, 2014 09:42 PM

CompsciOverflow

Unique boolean functions with one input

I have an assignment where I have to

write a truth table for all possible unique Boolean functions with one input

but I do not understand exactly what I have to do.

By unique, the teacher says that the functions must, for the same input, have different outputs...

I have thought about writing down the truth table for a NOT, since it has just one input, but I am not sure.

by Broly at October 20, 2014 09:28 PM

/r/emacs

Package.el didn't prune my unused packages, so I wrote the code myself (warning: potentially badly)

The post this post is about: Making package.el behave like Vundle

I'm a long-time Vim user who loved the fact that Vundle kept all of my dependencies perfectly in sync across multiple machines. Moving to Emacs appears to have gone well, other than the fact that I could never get my package management as smooth as I had it with Vim.

I've written a bunch of functions to prune packages that I don't mention in a dependency list, taking dependencies of dependencies into account. The post I linked above contains a bit more context, but here are the functions in question anyway.

;; Package pruning tools.
(defun flatten (mylist)
  "Flatten MYLIST, taken from http://rosettacode.org/wiki/Flatten_a_list#Emacs_Lisp for sanity."
  (cond ((null mylist) nil)
        ((atom mylist) (list mylist))
        (t (append (flatten (car mylist)) (flatten (cdr mylist))))))

(defun filter (predicate subject)
  "Use PREDICATE to filter SUBJECT and return the result."
  (delq nil (mapcar (lambda (x) (and (funcall predicate x) x)) subject)))

(defun get-package-name (package)
  "Fetch the symbol name of a PACKAGE."
  (car package))

(defun get-package-version (package)
  "Return the version string for PACKAGE."
  (package-version-join (aref (cdr package) 0)))

(defun get-package-dependencies (package)
  "Fetch the symbol list of PACKAGE dependencies."
  (mapcar 'car (elt (cdr package) 1)))

(defun get-packages-dependency-tree (packages)
  "Recursively fetch all dependencies for PACKAGES and return a tree of lists."
  (mapcar (lambda (package)
            (list (get-package-name package)
                  (get-packages-dependency-tree (get-package-dependencies package))))
          (get-packages-as-alist packages)))

(defun get-packages-as-alist (packages)
  "Return the list of PACKAGES symbols as an alist, containing version and dependency information."
  (filter (lambda (n) (car (member (car n) packages))) package-alist))

(defun get-all-current-dependencies (packages)
  "Return all packages found in PACKAGES with their dependencies recursively."
  (delq nil (delete-dups (flatten (get-packages-dependency-tree packages)))))

(defun get-all-obsolete-packages (packages)
  "Return all packages in an alist which are not contained in PACKAGES."
  (filter (lambda (n) (not (member (car n) (get-all-current-dependencies packages))))
          package-alist))

(defun prune-installed-packages (packages)
  "Delete all packages not listed or depended on by anything in PACKAGES."
  (mapc (lambda (n)
          (package-delete (symbol-name (get-package-name n))
                          (get-package-version n)))
        (get-all-obsolete-packages packages)))

I have a plain list of package names that I pass to prune-installed-packages as part of my main synchronisation function. All of this is detailed in the blog post.

I hope you find this useful!

submitted by Wolfy87

October 20, 2014 09:21 PM

StackOverflow

What type of webapp is the sweet spot for Scala's Lift framework?

What kind of applications are the sweet spot for Scala's Lift web framework?

My requirements:

  1. Ease of development and maintainability
  2. Ready for production purposes, i.e. good active online community, regular patches and updates for security and performance fixes, etc.
  3. Framework should survive a few years. I don't want to write an app in a framework for which no updates/patches are available after 1 year.
  4. Has good UI templating engines
  5. Interoperation with Java (Scala satisfies this already. Just mentioning here for completeness' sake)
  6. Good component-oriented development.
  7. Time required to develop should be proportional to the complexity of the web application.
  8. Should not be totally configuration based. I hate it when code gets automatically generated for me and does all sorts of magic under the hood. That is a debugging nightmare.
  9. Amount of Lift knowledge required to develop a webapp should be proportional to the complexity of the web application, i.e. I shouldn't have to spend 10+ hours learning Lift just to develop a simple TODO application. (I have knowledge of databases and Scala.)
Does Lift satisfy these requirements?

by user855 at October 20, 2014 09:20 PM

How to remove null cases from a List(List(List())) in Scala?

I want only valid values from a List(List(List())), for example,

List(List(List(())), List(List(())), List(List(())), List(List(())), List(List(())), List(List(book eraser -> pen , confidence :66.0)))
List(List(List(())), List(List(Ink -> pen eraser , confidence :100.0)), List(List(())), List(List(pen Ink -> eraser , confidence :100.0)), List(List(())), List(List(Ink eraser -> pen , confidence :100.0)))

I need only the inside strings,

book eraser -> pen , confidence :66.0
Ink -> pen eraser , confidence :100.0
pen Ink -> eraser , confidence :100.0
Ink eraser -> pen , confidence :100.0
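A hedged sketch (the variable names are mine, and I'm assuming the valid inner entries are strings while the empty cases are the unit value ()): flattening twice exposes the innermost values, and collect keeps only the strings:

val nested: List[List[List[Any]]] =
  List(List(List(())), List(List("Ink -> pen eraser , confidence :100.0")))

val cleaned: List[String] = nested.flatten.flatten.collect { case s: String => s }
// List("Ink -> pen eraser , confidence :100.0")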

by rosy at October 20, 2014 08:58 PM

/r/osdev

Announcing swieros - A tiny hand crafted CPU emulator, C compiler, and OS

Hello all!

The project:

A tiny and fast Unix-ish kernel (based on xv6), compiler, and userland for fun, education, and research.

Virtual CPU versus messy real hardware, fast enough to support self-emulation.

Hand-crafted C subset compiler allowing on-the-fly compilation.

Refactored subsets of well known standards versus reinventing previously "solved" problems.

Embedded philosophy with no usernames or permissions.

Encourage prototyping of "blue sky" ideas throughout the entire architecture/software stack.

Runs under Windows or Linux.

See the readme for more details and a full tutorial walk-through.

https://github.com/rswier/swieros

submitted by rswier

October 20, 2014 08:48 PM

StackOverflow

Apache Spark: what happens when one uses a host object value within a worker that has not been broadcasted?

Imagine a simple program like this:

def main(args: Array[String]): Unit = {
  val hostLocalValue = args(0).toInt
  val someRdd = getSomeIntRdd
  val mySum = someRdd
    .map(x => if (x < 0) 1 else hostLocalValue)
    .reduce(_ + _)
  print(mySum)
}

The map function, which is executed on a remote worker, uses a host-local value that was never broadcast. How does this work? If THAT works all the time, then what do we need broadcast() for?
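A hedged sketch of the distinction as I understand it (sc and someRdd assumed in scope): values referenced from a closure are serialized into the task and shipped with every task, which is fine for a small Int; a Broadcast variable is shipped to each executor once and cached, which pays off for large read-only data:

val viaClosure = someRdd.map(x => x + hostLocalValue)  // value travels with every task

val bc = sc.broadcast(hostLocalValue)                  // shipped once per executor
val viaBroadcast = someRdd.map(x => x + bc.value)      // tasks read the cached copy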

by Ilya Smagin at October 20, 2014 08:43 PM

QuantOverflow

Different ways of portfolio optimization

There are different ways to optimize portfolios:

$$ \max R^Tw\tag{1}$$

or

$$ \min w^T \Sigma w\tag{2}$$

and finally using a risk tolerance $\lambda$:

$$ \min{(w^T\Sigma w-\lambda R^T w)}\tag{3}$$

Suppose we have the constraints $\sum w_i = 1$, $w_i\ge 0$ for all the optimization problems. Additionally, we can define further constraints for the optimization problems $(1)$ and $(2)$. For $(1)$: $w^T\Sigma w\le \sigma$, i.e. the risk should not exceed a certain level $\sigma$. The same is possible for $(2)$ with return, adding the constraint $R^T w\ge r$ for a minimal target return $r$. My question is: in the optimization problem $(3)$, does it make sense to add a constraint like $w^T \Sigma w \le \sigma$ or $R^Tw \ge r$? Am I right that by adding such a constraint we would discard solutions (efficient frontier portfolios) which do not satisfy it?

by user8 at October 20, 2014 08:30 PM

StackOverflow

Scala generator using delimited continuation

I found that the pattern provided in https://gist.github.com/arnolddevos/574873 is very appealing for implementing lazy sequences like the "yield" keyword in C# / Python. I would like to write a post-order traversal function for a binary tree in Scala:

def postOrderTraverse: Generator[BinaryTreeNode[T]] = generator(YIELD => {
  val stack = new Stack[BinaryTreeNode[T]]
  var last: BinaryTreeNode[T] = null

  stack.push(this)
  while (stack.size != 0) {
    val curr = stack.top
    if ((curr.left != null || curr.right != null) && last != curr.left) {
      if (curr.left != null)
        stack.push(curr.left)
      if (curr.right != null)
        stack.push(curr.right)
    }
    else {
      YIELD(curr)
      last = curr
      stack.pop()
    }
  }
})

This code does not compile because the if {...} else {...} is not of type Any@suspendable. How can I make this work?

by Tongfei Chen at October 20, 2014 08:26 PM

CompsciOverflow

Primitive recursion (course-of-values recursion)

If $f(n)$ is any function, we write $\bar{f}(0)=1$ and $\bar{f}(n)=[f(0),f(1),\ldots,f(n-1)]$ if $n\ne 0$, and let $f(n)=g(n,\bar{f}(n))$ for all $n$. Show that if $g$ is recursive, so is $f$. I don't want anybody to solve this problem for me; I just want to hear your guidance about it. I'll appreciate it.

by fred at October 20, 2014 08:24 PM

Can git be used to share files between multiple computers not on the same LAN? [on hold]

A friend and I have been working on a project and need a way to share files between computers easily. While there are other alternatives such as Dropbox, I'd prefer to use something else. Could git be set up to share files between 2 computers that are not connected to the same LAN, such as uploading edited files, sending them to a server, and syncing the updated files to the other computer?

by Jack at October 20, 2014 08:23 PM

StackOverflow

Play Framework Ning WS API encoding issue with HTML pages

I'm using Play Framework 2.3 and the WS API to download and parse HTML pages. For non-English pages (e.g. Russian, Hebrew), I often get the wrong encoding.

Here's an example:

def test = Action.async { request =>

    WS.url("http://news.walla.co.il/item/2793388").get.map { response =>
        Ok(response.body)
    }
}

This returns the web page's HTML. English characters are received ok. The Hebrew letters appear as gibberish (not just when rendering; at the internal String level), like so:

<title>29 ×ר×××× ××פ××ת ×ש×××× ×× ×¤××, ××× ×©×××©× ×שר×××× - ×××××! ××ש×ת</title>

Other articles from the same website can appear ok.

Using cURL on the same web page returns perfectly fine text, which makes me believe the problem is within the WS API.

Any ideas?

Edit:

I found a solution in this SO question.

Parsing the response as ISO-8859-1 and then converting it to UTF-8, like so:

Ok(new String(response.body.getBytes("ISO-8859-1") , response.header(CONTENT_ENCODING).getOrElse("UTF-8")))

displays correctly. So I have a working solution, but why isn't this done internally?
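A hedged guess at the cause, plus a generalization of the fix (the helper name is mine): when the Content-Type header carries no charset, the underlying client appears to fall back to ISO-8859-1 (the old HTTP/1.1 default), so the round-trip above recovers the raw bytes. Note that the charset lives in Content-Type, not Content-Encoding (which is for gzip and friends), so reading it from there is more robust:

import play.api.libs.ws.WSResponse

def bodyWithCharset(response: WSResponse): String = {
  // response.body was decoded as ISO-8859-1; getBytes recovers the raw bytes.
  val charset = response.header("Content-Type")
    .flatMap(ct => "charset=([^;\\s]+)".r.findFirstMatchIn(ct).map(_.group(1)))
    .getOrElse("UTF-8")
  new String(response.body.getBytes("ISO-8859-1"), charset)
}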

by Lior at October 20, 2014 08:11 PM

Separate two lists by the difference in their elements

If I have

val incomingIds : List[Int] = ....
val existingIds : List[Int] = //this makes db calls and finds existing records (only interested in returning ids)

Next, I want to compare incomingIds with existingIds in the following way.

say I have

val incomingIds : List[Int] = List(2,3,4,5)
val existingIds : List[Int] = List(1,2,3,6)

What the above sample suggests is that my API should be able to find ids that are subject to deletion (ones that exist in incomingIds but not in existingIds). In this sample 1, 4 and 5 are not shared between the two lists, meaning 1, 4, 5 should go into

val idsForDeletion :List[Int]

and there will be another list, call it

val idsForInsertion :List[Int]. 

So 6 should go into the idsForInsertion list.

Is there a simple way to partition lists in such a way?
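For reference, a sketch of the two set differences (the question's mapping of ids to deletion vs. insertion is a little ambiguous, so the direction of each difference is an assumption):

val incomingIds: List[Int] = List(2, 3, 4, 5)
val existingIds: List[Int] = List(1, 2, 3, 6)

// In existing but not incoming (here: 1 and 6), and vice versa (4 and 5).
// Using a Set as the predicate makes membership tests cheap.
val inExistingOnly: List[Int] = existingIds.filterNot(incomingIds.toSet)
val inIncomingOnly: List[Int] = incomingIds.filterNot(existingIds.toSet)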

by user2066049 at October 20, 2014 08:02 PM

CompsciOverflow

How to calculate Width of Physical Address?

How do you calculate Size/Width of Physical Address?

It is given that:

  • Width of Virtual Machine address is 64 bits
  • Size of Page is 32K
  • Size of PTE (page table entry) is 8 bytes
  • Bits of physical frame number in PTE is 40

From reading here and here:

Size of Physical Address = Size of Page $\times$ Number of Pages

Size of Page = 32K = $2^5 \times 2^{10} = 2^{15}$

Total Logical Size = Total Virtual Size = $2^{x}$, where $x$ is the number of address bits. Since the width of the virtual machine address is 64 bits, the size is $2^{64}$.

Number of Pages = $\frac{\text{Total Logical Size}}{\text{Size of Page}} = \frac{2^{64}}{2^{15}} = 2^{49}$

Size of Physical Address = Size of Page $\times$ Number of Pages = $2^{15} \times 2^{49} = 2^{64}$

Thus the Width of Physical Address is 64 bits!

Are my calculations/assumptions correct?
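For comparison, a worked sketch of a different reading (an assumption, not the definitive answer): the calculation above reproduces the size of the virtual address space, whereas the physical address width is usually derived from the PTE's physical frame number plus the page offset:

Offset bits = $\log_2(2^{15}) = 15$

Physical address width = frame-number bits + offset bits = $40 + 15 = 55$ bits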

by lucidgold at October 20, 2014 07:56 PM

Lobsters

New gem pumog

I uploaded a new version of pumog 1.0.1 (https://rubygems.org/gems/pumog). Pumog stands for PuppetModuleGenerator and can be used to generate a basic structure for new Puppet modules, with or without documentation.

Comments

by wikimatze at October 20, 2014 07:50 PM

TheoryOverflow

Complexity lower bound of finding the factorial of a number

I was wondering about the complexity of computing the factorial of a number, mostly because this problem is not referenced in the complexity books I have read.

Two similar problems, Matrix Multiplication and Factorization, appear in almost all discussions about $\mathrm{P}$ and $\mathrm{NP}$. The complexity of the first is a major research field, see here, and we are trying to put Factorization inside $\mathrm{P}$.

But about the seemingly close problem of Factorial almost nothing is said. The only results I could find are the one mentioned here and the wiki article from as far back as 1983.

Why is there no interest in it? Can it have any implications in Complexity Theory, like Factoring? Can Factorial be in $\mathrm{P}$?

Lastly, one thought: Factorial should be in $\mathrm{NP}$ (or rather in $\mathrm{FNP}$?); the certificate would be the actual number, that is, $n!$?

by Harry at October 20, 2014 07:45 PM

StackOverflow

Scala difference of two lists

I have two lists:

val list1 = List("word1","word2","word2","word3","word1")
val list2 = List("word1","word4")

I want to remove all occurrences of list2 elements from list1, i.e. I want

List("word2","word2","word3") <= list1 *minus* list2

I did list1 diff list2, which gives me List("word2","word2","word3","word1"); it removes only the first occurrence of "word1".

I cannot convert them to sets because I need to keep the duplicates (see "word2" above). What should I do?
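A sketch of one fix (assuming the goal is to drop every occurrence of any element of list2 while keeping the survivors' duplicates):

// A Set used as a predicate removes all occurrences, unlike `diff`,
// which removes one occurrence per element of its argument.
val exclude = list2.toSet
val result = list1.filterNot(exclude) // List("word2", "word2", "word3")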

by Pavan K Mutt at October 20, 2014 07:34 PM

Jeff Atwood

Your Community Door

What are the real world consequences to signing up for a Twitter or Facebook account through Tor and spewing hate toward other human beings?

As far as I can tell, nothing. There are barely any online consequences, even if the content is reported.

But there should be.

The problem is that Twitter and Facebook aim to be discussion platforms for "everyone", where every person, no matter how hateful and crazy they may be, gets a turn on the microphone. They get to be heard.

The hover text for this one is so good it deserves escalation:

I can't remember where I heard this, but someone once said that defending a position by citing free speech is sort of the ultimate concession; you're saying that the most compelling thing you can say for your position is that it's not literally illegal to express.

If the discussion platform you're using aims to be a public platform for the whole world, there are some pretty terrible things people can do and say to other people there with no real consequences, under the noble banner of free speech.

It can be challenging.

How do we show people like this the door? You can block, you can hide, you can mute. But what you can't do is show them the door, because it's not your house. It's Facebook's house. It's their door, and the rules say the whole world has to be accommodated within the Facebook community. So mute and block and so forth are the only options available. But they are anemic, barely workable options.

As we build Discourse, I've discovered that I am deeply opposed to mute and block functions. I think that's because the whole concept of Discourse is that it is your house. And mute and ignore, while arguably unavoidable for large worldwide communities, are actively dangerous for smaller communities. Here's why.

  • It allows you to ignore bad behavior. If someone is hateful or harassing, why complain? Just mute. No more problem. Except everyone else still gets to see a person being hateful or harassing to another human being in public. Which means you are now sending a message to all other readers that this is behavior that is OK and accepted in your house.

  • It puts the burden on the user. A kind of victim blaming — if someone is rude to you, then "why didn't you just mute / block them?" The solution is right there in front of you, why didn't you learn to use the software right? Why don't you take some responsibility and take action to stop the person abusing you? Every single time it happens, over and over again?

  • It does not address the problematic behavior. A mute is invisible to everyone. So the person who is getting muted by 10 other users is getting zero feedback that their behavior is causing problems. It's also giving zero feedback to moderators that this person should probably get an intervention at the very least, if not outright suspended. It's so bad that people are building their own crowdsourced block lists for Twitter.

  • It causes discussions to break down. Fine, you mute someone, so you "never" see that person's posts. But then another user you like quotes the muted user in their post, or references their @name, or replies to their post. Do you then suppress just the quoted section? Suppress the @name? Suppress all replies to their posts, too? This leaves big holes in the conversation and presents many hairy technical challenges. Given enough personal mutes and blocks and ignores, all conversation becomes a weird patchwork of partially visible statements.

  • This is your house and your rules. This isn't Twitter or Facebook or some other giant public website with an expectation that "everyone" will be welcome. This is your house, with your rules, and your community. If someone can't behave themselves to the point that they are consistently rude and obnoxious and unkind to others, you don't ask the other people in the house to please ignore it – you ask them to leave your house. Engendering some weird expectation of "everyone is allowed here" sends the wrong message. Otherwise your house no longer belongs to you, and that's a very bad place to be.

I worry that people are learning the wrong lessons from the way Twitter and Facebook poorly handle these situations. Their hands are tied because they aspire to be these global communities where free speech trumps basic human decency and empathy.

The greatest power of online discussion communities, in my experience, is that they don't aspire to be global. You set up a clubhouse with reasonable rules your community agrees upon, and anyone who can't abide by those rules needs to be gently shown the door.

Don't pull this wishy washy non-committal stuff that Twitter and Facebook do. Community rules are only meaningful if they are actively enforced. You need to be willing to say this to people, at times:

No, your behavior is not acceptable in our community; "free speech" doesn't mean we are obliged to host your content, or listen to you being a jerk to people. This is our house, and our rules.

If they don't like it, fortunately there's a whole Internet of other communities out there. They can go try a different house. Or build their own.

The goal isn't to slam the door in people's faces – visitors should always be greeted in good faith, with a hearty smile – but simply to acknowledge that in those rare but inevitable cases where good faith breaks down, a well-oiled front door will save your community.


by Jeff Atwood at October 20, 2014 07:32 PM

TheoryOverflow

How prevalent are traffic control algorithms?

Can anyone point me to some algorithms that specialize in traffic control and congestion prevention? I have always wondered whether traffic lights optimize for specific conditions.

by Kian Sheik at October 20, 2014 07:19 PM

StackOverflow

How Do I Configure Buildr to run ScalaTest 2.11?

I'm using Buildr 1.4.20 (with Ruby 2.0.0 on 64-bit Linux) and trying to use ScalaTest with Scala 2.11.2, but I'm getting the following ClassNotFoundException every time I try to run buildr test.

Running tests in MyProject
ScalaTest "MySimpleTest"
An exception or error caused a run to abort. This may have been caused by a problematic custom reporter.
java.lang.NoClassDefFoundError: scala/xml/MetaData
    at org.scalatest.tools.ReporterFactory.createJunitXmlReporter(ReporterFactory.scala:209)
    at org.scalatest.tools.ReporterFactory.getReporterFromConfiguration(ReporterFactory.scala:230)
    at org.scalatest.tools.ReporterFactory$$anonfun$createReportersFromConfigurations$1.apply(ReporterFactory.scala:242)
    at org.scalatest.tools.ReporterFactory$$anonfun$createReportersFromConfigurations$1.apply(ReporterFactory.scala:241)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
    at scala.collection.Iterator$class.foreach(Iterator.scala:743)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1177)
    at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
    at org.scalatest.tools.ReporterConfigurations.foreach(ReporterConfiguration.scala:43)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
    at org.scalatest.tools.ReporterConfigurations.map(ReporterConfiguration.scala:43)
    at org.scalatest.tools.ReporterFactory.createReportersFromConfigurations(ReporterFactory.scala:241)
    at org.scalatest.tools.ReporterFactory.getDispatchReporter(ReporterFactory.scala:245)
    at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:2720)
    at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1043)
    at org.scalatest.tools.Runner$.run(Runner.scala:883)
    at org.scalatest.tools.ScalaTestAntTask.execute(ScalaTestAntTask.scala:329)
    at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
Caused by: java.lang.ClassNotFoundException: scala.xml.MetaData
    at org.apache.tools.ant.AntClassLoader.findClassInComponents(AntClassLoader.java:1365)
    at org.apache.tools.ant.AntClassLoader.findClass(AntClassLoader.java:1315)
    at org.apache.tools.ant.AntClassLoader.loadClass(AntClassLoader.java:1068)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    ... 19 more

Naturally, I thought I could fix this by adding a dependency with scala.xml.MetaData in it, so I added "org.scala-lang.modules:scala-xml_2.11:jar:1.0.2" to my test classpath, but I still get the same error.

I'm sure the class is indeed present in the .jar file:

atg@host:~> zipinfo ~/.m2/repository/org/scala-lang/modules/scala-xml_2.11/1.0.2/scala-xml_2.11-1.0.2.jar | grep MetaData
-rw----     2.0 fat     1441 bl defN 14-May-20 10:09 scala/xml/MetaData$$anonfun$asAttrMap$1.class
-rw----     2.0 fat     1312 bl defN 14-May-20 10:09 scala/xml/MetaData$$anonfun$toString$1.class
-rw----     2.0 fat     1215 bl defN 14-May-20 10:09 scala/xml/MetaData$$anonfun$toString1$1.class
-rw----     2.0 fat     4197 bl defN 14-May-20 10:09 scala/xml/MetaData$.class
-rw----     2.0 fat    10489 bl defN 14-May-20 10:09 scala/xml/MetaData.class

... so I can only assume that test.with isn't the right way to add this dependency in a Scala project. Can anyone please offer any advice on how to fix this?

My entire buildfile is as follows:

# enable Scala 2.11.2
Buildr.settings.build['scala.version'] = "2.11.2"
Buildr.settings.build['scala.test'] = 'org.scalatest:scalatest_2.11:jar:2.2.2'
Buildr.settings.build['scala.check'] = 'org.scalacheck:scalacheck_2.11:jar:1.11.6'
require 'buildr/scala'

VERSION_NUMBER = "1.0-SNAPSHOT"
GROUP = "..."
COPYRIGHT = "..."

repositories.remote << "http://repo1.maven.org/maven2"

DEPS_COMPILE = "javax.servlet:javax.servlet-api:jar:3.1.0"

desc "..."
define "MyProject" do
   project.version = VERSION_NUMBER
   project.group = GROUP
   manifest["Implementation-Vendor"] = COPYRIGHT

   compile.with DEPS_COMPILE

   test.with "org.scala-lang.modules:scala-xml_2.11:jar:1.0.2"
end

by ATG at October 20, 2014 07:08 PM

/r/clojure

Deftype Equality Definition

If I define a custom type with deftype, for example:

(deftype Pair [key value])

How can I prevent duplicate items from making it into a set? I've tried to override the equals method in Object, but I can't seem to prevent it.

What is the idiomatic way in Clojure to define object equivalence for a custom type?

submitted by Cthulukin
[link] [9 comments]

October 20, 2014 07:01 PM

StackOverflow

Akka: testing monitoring/death watch

In my scenario I have 2 actors:

  1. watchee (I use TestProbe)
  2. watcher (Watcher wrapped into TestActorRef to expose some internal state I track in my test)

Watcher should take some actions when watchee dies.

Here is the complete test case I've written so far:

class TempTest(_system: ActorSystem) extends TestKit(_system) with ImplicitSender with FunSuiteLike with Matchers with BeforeAndAfterAll {

  def this() = this(ActorSystem("TempTest"))

  override def afterAll {
    TestKit.shutdownActorSystem(system)
  }

  class WatcherActor(watchee: ActorRef) extends Actor {

    var state = "initial"
    context.watch(watchee)

    override def receive: Receive = {
      case "start" =>
        state = "start"
      case _: Terminated =>
        state = "terminated"
    }

  }

  test("example") {
    val watchee = TestProbe()
    val watcher = TestActorRef[WatcherActor](Props(new WatcherActor(watchee.ref)))

    assert(watcher.underlyingActor.state === "initial")

    watcher ! "start" // "start" will be sent and handled by watcher synchronously
    assert(watcher.underlyingActor.state === "start")

    system.stop(watchee.ref) // will cause Terminated to be sent and handled asynchronously by watcher
    Thread.sleep(100) // what is the best way to avoid blocking here?
    assert(watcher.underlyingActor.state === "terminated")
  }

}

Now, since all involved actors use CallingThreadDispatcher (all of Akka's test helpers get constructed using props with .withDispatcher(CallingThreadDispatcher.Id)), I can safely assume that when this statement returns:

watcher ! "start"

... the "start" message has already been processed by the WatcherActor, and thus I can make assertions based on watcher.underlyingActor.state.

However, based on my observations, when I stop watchee using system.stop or by sending it Kill, the Terminated message produced as a side effect of watchee's death gets processed asynchronously, on another thread.

A non-solution is to stop watchee, block the thread for some time and verify the Watcher state after that, but I'd like to know how to do this the right way (i.e., how can I be sure that after killing an actor, its watcher has received and processed the Terminated message signaling its death)?
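One way to avoid the sleep (a sketch; TestKit's awaitAssert polls the assertion until it passes or the timeout expires, which copes with the asynchronous delivery of Terminated):

import scala.concurrent.duration._

system.stop(watchee.ref)
// Retry the assertion for up to a second instead of blocking blindly.
awaitAssert(assert(watcher.underlyingActor.state === "terminated"), 1.second)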

by Eugeny Loy at October 20, 2014 06:51 PM

StackOverflow

Trouble with Ordered Trait in Scala

I am attempting to define a natural ordering for distinct though similar classes of object. In Java I would use Comparable, and it seems the way to do the equivalent in Scala is with Ordered. I have the following trait:

trait Positioned extends Ordered[Positioned] {
  def position: Int = 1

  override def compare(that: Positioned): Int = position - that.position
}

I want to apply this trait to multiple case classes like this one:

case class Image(id: String,
                 override val position: Int = 1) extends Positioned

This compiles just fine, but at runtime, when I call sorted on a collection of these Image objects, I get this error:

diverging implicit expansion for type scala.math.Ordering[com.myapp.Image]
starting with method $conforms in object Predef

Please let me know what this means and what I can do to fix it.
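A sketch of one workaround (an assumption about the cause: sorted needs an Ordering[Image], and the candidates derivable from Ordered[Positioned] make the implicit search diverge): supply the ordering explicitly.

// An explicit Ordering built from the position field sidesteps the
// diverging implicit search entirely.
val images = List(Image("b", 2), Image("a", 1))
val sortedImages = images.sorted(Ordering.by((i: Image) => i.position))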

by Vidya at October 20, 2014 06:21 PM

Passing function as argument in Scala/Figaro

I'm trying to learn Figaro, and since it is implemented in Scala I run into some Scala-specific issues. For example, in the code below Importance.probability takes two arguments: the first one is the distribution and the second one is a predicate. But when I try to run this code I get the following error:

Missing argument for greaterThan50

which makes sense since it actually takes one argument.

Since Scala is a functional language, I guess there is some standard way of passing functions as arguments that I have missed? I have tried to use _ to make it partially applied, but that is not working.

import com.cra.figaro.library.atomic.continuous.Uniform
import com.cra.figaro.algorithm.sampling.Importance

def greaterThan50(d: Double) = d > 50 
val temperatur = Uniform(10,70) 
Importance.probability(temperatur, greaterThan50)
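Two standard ways of passing the method as a function value (a sketch, assuming Importance.probability expects a Double => Boolean predicate):

// Eta-expand the method with a trailing underscore:
Importance.probability(temperatur, greaterThan50 _)

// Or pass an anonymous function directly:
Importance.probability(temperatur, (d: Double) => d > 50)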

by user3139545 at October 20, 2014 06:06 PM

StackOverflow

create an Arbitrary instance of a "type"

I have the following,

type Pos = (Int, Int) 

I want to generate random values of this type with some restrictions (both have to be 0-8).

I would like to do something like

instance Arbitrary Pos where
  arbitrary = do x <- choose(0,8) 
                 y <- choose(0,8)
                 return (x,y) 

and then use it in my test to have valid positions.

This won't work because I'm aliasing(?) tuples.

Other methods I have tried are to use implications in my test, to say

prop_my_prop (x,y) = abs x < 9  && abs y < 9 ==> ...

but I think that's pretty ugly, and in theory it might exhaust the QuickCheck tests (run over 1000 times).

This is an assignment, so I just want some indication of where to look or how to approach this; I'm not allowed to change Pos.

by skyw00lker at October 20, 2014 05:57 PM

StackOverflow

Scala Immutable Set is Mutable when declared as a var

I'm in the process of reading Programming in Scala, 2nd Edition (fantastic book, much better than the Scala website for explaining things in a non-rockety-sciencey manner) and I noticed this... oddity when going over immutable and mutable Sets.

It declares the following as an immutable set

var jetSet=Set("Boeing", "Airbus")
jetSet+="Lear"
println(jetSet.contains("Cessna"))

And then it states that only mutable sets define the += method. OK, that makes perfect sense. The problem is that this code works. And the type of set created, when tested in the REPL, is in fact the immutable Set, but it has the += method defined on it and it functions perfectly fine. Behold:

scala> var a = Set("Adam", "Bill")
a: scala.collection.immutable.Set[String] = Set(Adam, Bill)

scala> a += "Colleen"

scala> println(a)
Set(Adam, Bill, Colleen)

scala> a.getClass
res8: Class[_ <: scala.collection.immutable.Set[String]] = class scala.collection.immutable.Set$Set3

But if I declare the Set as a val, the immutable Set created does not have the += method defined:

scala> val b = Set("Adam", "Bill")
b: scala.collection.immutable.Set[String] = Set(Adam, Bill)

scala> b += "Colleen"
<console>:9: error: value += is not a member of scala.collection.immutable.Set[String]
          b += "Colleen"

What is going on here? They are both stated to be immutable Sets, but the one declared as a var has access to the += method and can use it.

Also, when I kept calling the getClass method on the var immutable Set, I noticed something strange...

scala> a.getClass
res10: Class[_ <: scala.collection.immutable.Set[String]] = class scala.collection.immutable.Set$Set3

scala> a += "One"

scala> a.getClass
res12: Class[_ <: scala.collection.immutable.Set[String]] = class scala.collection.immutable.Set$Set4

scala> a += "Two"

scala> a.getClass
res14: Class[_ <: scala.collection.immutable.Set[String]] = class scala.collection.immutable.HashSet$HashTrieSet

scala> a += "Tree"

scala> a.getClass
res16: Class[_ <: scala.collection.immutable.Set[String]] = class scala.collection.immutable.HashSet$HashTrieSet

scala> a
res17: scala.collection.immutable.Set[String] = Set(One, Tree, Bill, Adam, Two, Colleen)

My guess is that, thanks to some hidden syntactic sugar, Scala recognizes that it's a var and allows you to replace it with a newly constructed set anyway.
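That guess is essentially right; a sketch of the desugaring the compiler applies (the assignment-operator rule in the language specification):

// With a var whose type has no real += method but does have +,
// the compiler rewrites the assignment operator:
a += "Colleen"    // is expanded to:
a = a + "Colleen" // build a new immutable Set, rebind the var
// A val cannot be rebound, hence "value += is not a member of ...Set".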

by Adam Ritter at October 20, 2014 05:43 PM

TheoryOverflow

need help with java, please answer :) I have been using BlueJ [on hold]

Implement a Book class for a book store as described: A Book has a title, cost, and number in stock. Set the title and cost to values passed to the constructor. Create a “get method” for each instance variable. Create a method that will increase the number in stock by an amount provided. Finally, create a method that will return the total value of the given book in stock (do not worry about formatting this output to a normal currency value). You do not need to provide any comments in your code. Here is sample code and output from a Driver for your Book class:

Book book1 = new Book("Extreme Alpinism", 19.95);
book1.increaseStock(5);
book1.increaseStock(2);
System.out.println(book1.getNumInStock() + " copies of " + book1.getName() + " in stock.");
System.out.println("The value of " + book1.getName() + " in stock is $" + book1.calcStockValue());

Output: 7 copies of Extreme Alpinism in stock. The value of Extreme Alpinism in stock is $139.65

by John wekler at October 20, 2014 05:39 PM

Dave Winer

Today's background image is Sheep Meadow in Central Park.

October 20, 2014 05:33 PM

UnixOverflow

FreeBSD 10: CARP & LAGG interface incorrectly chooses state

I'm having an issue with my CARP configuration on a pair of FreeBSD 10 servers acting as NAS units. I'm using CARP so that the servers share a virtual IP with the goal of making failover somewhat transparent from a client point of view. Each server has two LAGG interfaces using LACP bonding, which is enabled/configured on the switch. The interface configuration is as follows, with the secondary node being nearly identical except that the carp aliases have an advskew of 100:

## Networking Configuration
hostname="zfs0"
defaultrouter="192.168.10.17"

ifconfig_igb0="up"
ifconfig_igb1="up"
ifconfig_igb2="up"
ifconfig_igb3="up"
ifconfig_igb4="up"
ifconfig_igb5="up"

# Set up bonded interfaces
cloned_interfaces="lagg0 lagg1"
ifconfig_lagg0="laggproto lacp laggport igb4 laggport igb5"
ipv4_addrs_lagg0="192.168.10.12/24"

# LACP bond for SAN ports
ifconfig_lagg1="laggproto lacp laggport igb0 laggport igb1 laggport igb2 laggport igb3"
ipv4_addrs_lagg1="10.0.0.120/24"

# CARP (Common Address Redundancy Protocol)
ifconfig_lagg0_alias0="vhid 9 pass somepass alias 192.168.10.9/32"
ifconfig_lagg1_alias0="vhid 1 pass somepass alias 10.0.0.110/32"

The issue I'm having is that if one of the members of a LAGG interface goes down (for example, cable unplugged), the CARP subsystem immediately drops into BACKUP and fails over to the other node. This should not be happening, of course. The expected behavior is that the CARP subsystem should only failover once the lagg0 or lagg1 interface goes down completely (all member interfaces are dead).

The other issue, which is possibly related, is that CARP between the two systems does not correctly fail back when the system with the lower advertisement skew is back up.

If any FreeBSD guru could jump in and tell me what I'm doing wrong, I'd greatly appreciate it.

Oh, for the record:

root@zfs0:~ # uname -a
FreeBSD zfs0 10.1-PRERELEASE FreeBSD 10.1-PRERELEASE #0 r271180: Fri Sep  5 12:33:58 PDT 2014     root@zfs0:/usr/obj/usr/src/sys/GENERIC  amd64

by cathode at October 20, 2014 05:28 PM

StackOverflow

Is it possible to have compiler specific code sections in scala

I have a situation where I need certain functionality that is available in Spark library version 1.1.0, but I have two different platforms I need to run this application on. One uses Spark 1.1.0 and the other uses Spark 0.9.1. The functionality available in Spark 1.1.0 is not available in Spark 0.9.1.

That said, is it possible to have some compiler flags in the Scala code, so that when compiling against Spark 1.1.0 certain code gets compiled, and when compiling against the Spark 0.9.1 library another piece of code gets compiled?

like so :

#ifSpark1.1.0
val docIdtoSeq: RDD[(String, Long)] = listOfDocIds.zipWithIndex()
#endifSpark1.1.0

#ifSpark0.9.1
    val docIdtoSeq: RDD[(String, Long)] = listOfDocIds.mapPartitionsWithIndex{case(partId,it) => it.zipWithIndex.map{case(el,ind) => (el,ind+partId*constantLong)}}
#endifSpark0.9.1

Many thanks
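Scala has no preprocessor, so conditional compilation in the C sense isn't available. One common sketch of a workaround (an assumption about the build: sbt, with illustrative directory and property names) keeps the divergent code in version-specific source directories and selects one at build time:

// build.sbt sketch: pick the Spark version via a system property,
// e.g. `sbt -Dspark.version=0.9.1 compile`.
val sparkVersion = sys.props.getOrElse("spark.version", "1.1.0")

libraryDependencies += "org.apache.spark" %% "spark-core" % sparkVersion

// Compile src/main/spark-1.1 or src/main/spark-0.9 in addition to
// src/main/scala; each directory holds its own docIdtoSeq implementation.
unmanagedSourceDirectories in Compile +=
  baseDirectory.value / "src" / "main" / ("spark-" + sparkVersion.take(3))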

by Ramdev at October 20, 2014 05:13 PM

Dave Winer

New Scripting News home page coming

The reason things are so sparse here the last week or so is that there's a new version of Scripting News coming, and all my energies are focused there.

It'll have the linkblog and the river on the same page with the blog posts in a tabbed interface.

It'll be using all the latest JavaScript technology. Radio3, River4, Fargo, Little Card Editor etc.

It's time that things start rolling up.

Still diggin as someone said once a long time ago.

A picture of a slice of cheese cake.

October 20, 2014 05:10 PM

StackOverflow

How to install fail2ban using ansible?

I want to install fail2ban across a range of servers using Ansible. Ansible is already installed and set up (someone else did this), but my main problem is understanding how to create a playbook (which, if I'm correct, will install fail2ban across the range of servers).

Oh, I am also using the jail.conf file from a previous machine where I installed fail2ban manually, as I want the configuration (such as how long to ban people, who's on the whitelist, etc.) to be the same across all the servers!

This is my first ever post so if I've forgotten anything please be gentle!
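A minimal playbook sketch (assumptions: Debian/Ubuntu hosts, an inventory group named servers, and your existing jail.conf sitting next to the playbook; all names are illustrative):

---
- hosts: servers
  sudo: yes
  tasks:
    - name: Install fail2ban
      apt: name=fail2ban state=present

    - name: Deploy the shared jail configuration
      copy: src=jail.conf dest=/etc/fail2ban/jail.conf owner=root mode=0644
      notify: restart fail2ban

  handlers:
    - name: restart fail2ban
      service: name=fail2ban state=restarted

Running it with ansible-playbook -i inventory fail2ban.yml applies the same configuration to every server in the group.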

by Toby Applegate at October 20, 2014 05:04 PM

StackOverflow

Redefining a let'd variable in Clojure loop

OK. I've been tinkering with Clojure, and I continually run into the same problem. Let's take this little fragment of code:

(let [x 128]
  (while (> x 1)
    (do
      (println x)
      (def x (/ x 2)))))

Now I expect this to print out a sequence starting with 128 as so:

128
64
32
16
8
4
2

Instead, it's an infinite loop, printing 128 over and over. Clearly my intended side effect isn't working.

So how am I supposed to redefine the value of x in a loop like this? I realize this may not be Lisp-like (I could use an anonymous function that recurses on itself, perhaps), but if I don't figure out how to set a variable like this, I'm going to go mad.

My other guess would be to use set!, but that gives "Invalid assignment target", since I'm not in a binding form.

Please, enlighten me on how this is supposed to work.

by MBCook at October 20, 2014 05:02 PM

CompsciOverflow

Efficient method of tracking and storing relationships of objects in a tree structure

I'm looking for an efficient method of storing relationships of objects (people) in a tree structure (a pedigree) that my software is crawling. For example, my software may search up the tree to an individual and then search the descendants of that individual. How can I store the relationship between two not very closely related individuals?


Ideally, I will have a way of knowing the relationship between any two individuals my software has crawled. Does that make sense?

Thanks for any tips or thoughts!

by exvance at October 20, 2014 04:47 PM

StackOverflow

Slick way to (bulk) update one-to-many join table efficiently

For simplification let's say I have

case class Department(id:Int)
case class DepartmentEmployeeJoin(id:Int, deptId: Int, employeeId: Int)

(these are for illustration purposes; real-life objects are different but have the same relationship). DepartmentEmployeeJoin is a (1:many) join table between the Department and Employee tables.

I need to develop an API which will be responsible for inserting/updating DepartmentEmployeeJoin. Say the call is for deptId=1. The API will have to look up DepartmentEmployeeJoin and, if it finds records for deptId=1, update or insert employeeId. Note that the id column in DepartmentEmployeeJoin is an AutoInc column.

What's the best way to bulk update/insert/delete in the DepartmentEmployeeJoin table with Slick?
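One common sketch (assumptions: Slick 2.x lifted embedding, a TableQuery named departmentEmployeeJoins, and an implicit Session; a delete-then-bulk-insert sync rather than per-row diffing, which is often acceptable for a join table):

// Hypothetical sync: wipe the department's join rows, then bulk-insert
// the new set. The AutoInc id is skipped by inserting through a
// projection of only (deptId, employeeId).
def syncDepartment(deptId: Int, employeeIds: Seq[Int])(implicit s: Session): Unit = {
  departmentEmployeeJoins.filter(_.deptId === deptId).delete
  departmentEmployeeJoins.map(r => (r.deptId, r.employeeId)) ++=
    employeeIds.map(e => (deptId, e))
}

Wrapping the two statements in session.withTransaction keeps the table consistent if the insert fails.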

by user2066049 at October 20, 2014 04:37 PM

/r/scala

[ANN] v1.0 release of s_mach.concurrent, a utility library for asynchronous tasks and scala.concurrent.Future

Hello /r/scala!

I am very happy to announce the first release of s_mach.concurrent https://github.com/S-Mach/s_mach.concurrent

s_mach.concurrent is an open-source Scala library that provides asynchronous serial and parallel execution flow control primitives for working with asynchronous tasks. An asynchronous task consists of two or more calls to functions that return a future result (A ⇒ Future[B]) instead of the result directly (A ⇒ B). s_mach.concurrent also provides utility and convenience code for working with scala.concurrent.Future.

  • Adds concurrent flow control primitives async and async.par for performing fixed size heterogeneous (tuple) and variable size homogeneous (collection) asynchronous tasks. These primitives:
    • Allow enabling optional progress reporting, failure retry and/or throttle control for asynchronous tasks
    • Ensure proper sequencing of returned futures, e.g. given f: Int ⇒ Future[String]:
      • List(1,2,3).async.map(f) returns Future[List[String]]
      • async.par.run(f(1),f(2),f(3)) returns Future[(String,String,String)]
    • Ensures fail-immediate sequencing of future results
    • Ensures all exceptions generated during asynchronous task processing can be retrieved (Future.sequence returns only the first)
    • collection.async and collection.async.par support collection operations such as map, flatMap and foreach on asynchronous functions, i.e. A ⇒ Future[B]
    • async.par.run(future1, future2, …) supports running fixed size heterogeneous asynchronous task (of up to 22 futures) in parallel
  • Adds ScheduledExecutionContext, a Scala interface wrapper for java.util.concurrent.ScheduledExecutorService that provides for scheduling delayed and periodic tasks
  • Adds non-blocking concurrent control primitives such as Barrier, Latch, Lock and Semaphore
  • Provides convenience methods for writing more readable, concise and DRY concurrent code such as Future.get, Future.toTry and Future.fold

I look forward to your feedback.

Thanks and may your day be awesome!

submitted by lancegatlin
[link] [2 comments]

October 20, 2014 04:31 PM

TheoryOverflow

How can I find the second cheapest spanning tree?

The classic Mininum Spanning Tree (MST) algorithms can be modified to find the Maximum Spanning Tree instead.

Can an algorithm such as Kruskal's be modified to return a spanning tree that is strictly more costly than an MST, but is the second cheapest? For example, if you switch one of the edges in this spanning tree, you end up with an MST and vice versa.

My question, though, is simply: How can I find the second cheapest spanning tree, given a graph $G$ with an MST?
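One standard approach (a sketch, not the only method): compute an MST; then, for every edge not in the tree, add it and drop the heaviest tree edge on the cycle it closes. The cheapest such swap gives a second-best spanning tree. A compact (deliberately inefficient) sketch, assuming integer node ids that are never -1 and at least one non-tree edge:

type Edge = (Int, Int, Int) // (u, v, weight)

def secondBestCost(mst: Set[Edge], all: Seq[Edge]): Int = {
  // Adjacency of the tree: node -> (neighbour, edge weight).
  val adj: Map[Int, Seq[(Int, Int)]] = mst.toSeq
    .flatMap { case (u, v, w) => Seq(u -> (v, w), v -> (u, w)) }
    .groupBy(_._1)
    .mapValues(_.map(_._2))

  // Heaviest edge weight on the unique tree path from a to b.
  def pathMax(a: Int, b: Int, prev: Int): Option[Int] =
    if (a == b) Some(Int.MinValue)
    else adj.getOrElse(a, Nil).view
      .filter { case (n, _) => n != prev }
      .flatMap { case (n, w) => pathMax(n, b, a).map(m => math.max(m, w)) }
      .headOption

  val mstCost = mst.toSeq.map(_._3).sum
  // For each non-tree edge (u, v, w): new cost = mstCost - pathMax + w.
  (for {
    e @ (u, v, w) <- all if !mst.contains(e)
    m <- pathMax(u, v, -1)
  } yield mstCost - m + w).min
}

Tracking which swap achieved the minimum recovers the tree itself, not just its cost.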

by user3783608 at October 20, 2014 04:27 PM

StackOverflow

How to check if two lists are partially identical haskell

This is my code; I'm trying to check if a list can be partially identical to another. It is a game of dominoes: a Domino = (Int,Int), a Board = [Domino], and an End is either left or right. I'm to check if a given domino goes onto a board; say, for example, can domino (2,3) go onto board [(3,4),(5,6)]? It should be able to go on the left end because (2,3) and (3,4) have a similar element. Here is my code:

goesP :: Domino -> Board -> End -> Bool

goesP (h,t) [(h1,t1)] LeftEnd
      | h==h1 || t==h1 = True
      | otherwise      = False
goesP (h,t) [(h1,t1)] RightEnd
      | h==t1 || t==t1 = True
      | otherwise      = False

by igolo at October 20, 2014 04:07 PM

High Scalability

Facebook Mobile Drops Pull For Push-based Snapshot + Delta Model

We've learned mobile is different. In If You're Programming A Cell Phone Like A Server You're Doing It Wrong we learned programming for a mobile platform is its own specialty. In How Facebook Makes Mobile Work At Scale For All Phones, On All Screens, On All Networks we learned bandwidth on mobile networks is a precious resource. 

Given all that, how do you design a protocol to sync state (think messages, comments, etc.) between mobile nodes and the global state holding servers located in a datacenter?

Facebook recently wrote about their new solution to this problem in Building Mobile-First Infrastructure for Messenger. They were able to reduce bandwidth usage by 40% and reduced by 20% the terror of hitting send on a phone.

That's a big win...that came from a protocol change.

Facebook Messenger went from a traditional notification-triggered full state pull:

by Todd Hoff at October 20, 2014 03:56 PM

CompsciOverflow

Offline scheduling fully determined arbitrary jobs in multiprocessor setting

Let $\mathcal{J} = \{J_1,...,J_n\}$ be a set of jobs with each $J_i = [a_i,r_i,d_i]$, where the job becomes available at its arrival time $a_i$, requires $r_i$ execution time and needs to be finished by its deadline $d_i$ (hard real-time). Assume we have $m$ processors available.

Given the above, what scheduling algorithms are available to deal with this? I'm aware of a publication by Horn that gives a solution using flows; since that was in 1974 and flows are comparatively slow, I tried to find faster algorithms for this setting, but couldn't find any. The book on scheduling by Pinedo suggested in this answer on CS.SE mentions the setting and that it can be solved using flows, but unfortunately nothing else. Apart from these two sources, I have been unsuccessful.

Are there known algorithms to solve the above problem that are faster than flow networks on $\Theta(n)$ nodes?

by G. Bach at October 20, 2014 03:38 PM

StackOverflow

what is the use of .map function, and what does that _. mean in scala

I am new to the Scala language and am following the tutorial from the book Play for Scala. Here is the code:

package models

case class Product(ean: Long, name: String, description: String)

object Product {
  var products = Set(
    Product(5010255079763L, "Paperclips Large", "Large Plain Pack of 1000"),
    Product(5018206244666L, "Giant Paperclips", "Giant Plain 51mm 100 pack"),
    Product(5018306332812L, "Paperclip Giant Plain", "Giant Plain Pack of 10000"),
    Product(5018306312913L, "No Tear Paper Clip", "No Tear Extra Large Pack of 1000"),
    Product(5018206244611L, "Zebra Paperclips", "Zebra Length 28mm Assorted 150 Pack")
  )

  def findAll = this.products.toList.sortBy(_.ean)

  def findByEan(ean: Long) = this.products.find(_.ean == ean)

  def save(product: Product) = {
    findByEan(product.ean).map( oldProduct =>
      this.products = this.products - oldProduct + product
    ).getOrElse(
      throw new IllegalArgumentException("Product not found")
    )
  }
}

Above is the full code. I have some problems understanding a few lines of it; please help me.

def findByEan(ean: Long) = this.products.find(_.ean == ean)

What is _. and why is it used in this line as _.ean?

What does the find method return?

findByEan(product.ean).map( oldProduct =>this.products = this.products - oldProduct + product
)

What is the use of the built-in .map method?
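For reference, a sketch of what these constructs do (standard Scala, independent of Play):

products.find(_.ean == ean)      // `_.ean == ean` is shorthand for:
products.find(p => p.ean == ean) // a one-argument anonymous function

// find returns an Option[Product]: Some(product) if an element
// matches the predicate, or None if nothing does.
// .map on an Option transforms the value inside a Some and leaves
// None untouched, so save only replaces the product when it was
// found, and getOrElse supplies the not-found branch.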

by user3801239 at October 20, 2014 03:36 PM

StackOverflow

Get all active Routes / Paths in a running Play application

Is there a standard way to get all possible (excluding wildcards, of course) routes/paths valid within a Play application?

I can do it with

Play.current.routes.map( _.documentation.map(_._2))

which gives me all available routes but it looks a bit hacky to me.

by Andreas Neumann at October 20, 2014 03:12 PM

How to code the Vehicle Routing Algorithm using clojure?

I was reading about the Vehicle Routing Problem in Algorithms and wanted to code it using Clojure. Can anybody help me with how to code it in Clojure? I coded it in C++ and Java, and want to try it with Clojure now. But, being new to this language, I can't proceed further.

So, I have a set of delivery vehicles and I want to write a function that will take the number of vehicles and a set of locations that need to be visited as input, like this:

(vehicle-routing 4 [[-12 44] [22 19] [-57 -80] [-12 90]] 4 [[1 1] [40 40] [29 29] [-50 -50] [-75 1] [20 -90] [0 0]])

And I need to produce routes for each vehicle that minimize the total distance traveled. Each location is [x y], a pair of integers.

So, I was thinking of first assigning locations randomly to each vehicle, and then finding the best route for each vehicle, so I've written a traveling salesman solver for that. Then I was thinking of swapping a pair of locations between two vehicles, finding the best route for each, and seeing if that is an improvement; if not, put the locations back where they were in the respective routes.

My C++ code is as follows:

#include <algorithm>
#include <climits>
#include <cmath>
#include <cstdlib>
#include <cstring>
#include <ctime>
#include <fstream>
#include <iostream>
#include <string>
using namespace std;

const int FINAL_EVAL = 1285;
const int NUMSITES = 20;

// simulated annealing
const float INITIAL_TEMP = 100;
const int ITERATION = 200;
const float FINAL_TEMP = 0.01;
const float DECREMENT_RULE=0.95;

// genetic algorithm
const float CROSSOVER_PROB = 0.7;
const float MUTATION_PROB = 0.2;
const int POPULATION = 500;
const int GENERATIONS = 100;

struct Roulette_piece {
    float end;
    int index;
};

/*returns a command line parameter if it exist*/
char* getCmdOption(char ** begin, char ** end, const std::string & option)
{
    char ** itr = std::find(begin, end, option);
    if (itr != end && ++itr != end)
    {
        return *itr;
    }
    return 0;
}

/*load the initial permutation p into its array*/
void load_initial_permutation(int arr[NUMSITES], int line_num)
{
    ifstream ifile("perm.txt");
    string line;
    int row = 0;
    if (ifile.is_open()){
        while(!ifile.eof())
        {
            getline(ifile,line);
            if (line_num == row)
            {

                int line_index = 0;
                int column = 0;
                for (;line_index<line.length();){
                    int pos = line.find(",", line_index);
                    if (pos != string::npos)
                    {
                        arr[column] = atoi((line.substr(line_index, pos-line_index)).c_str());
                        line_index = pos + 1;
                    }
                    else
                    {
                        arr[column] = atoi((line.substr(line_index, line.length() - line_index - 1)).c_str());
                        line_index = line.length();
                    }
                    column++;
                }
                return;
            }
            row++;

        }
    }
    cout<<"Fatal: no such perm entry" << endl;
    exit(1);
}

/*load the initial data matrices (flow and distance) from file into passed in array*/
void populate_array(int arr[NUMSITES][NUMSITES], string file){

    ifstream ifile(file.c_str());
    string line;
    int row = 0;
    if (ifile.is_open()){
        while(!ifile.eof()){
            getline(ifile,line);
            int line_index = 0;
            int column = 0;
            for (;line_index<line.length();){
                int pos = line.find(" ", line_index);
                if (pos != string::npos)
                {
                    arr[row][column] = atoi((line.substr(line_index, pos-line_index)).c_str());
                    line_index = pos + 1;
                }
                else
                {
                    arr[row][column] = atoi((line.substr(line_index, line.length() - line_index - 1)).c_str());
                    line_index = line.length();
                }
                column++;
            }
            row++;
        }
    }

    for (int i = 0;i<NUMSITES;i++){
        for (int j = 0; j< NUMSITES;j++)
            ;//arr[i][j] = 0;;
    }
}

void print(int p[NUMSITES]) {
    cout << "[";
    for (int i=0; i<NUMSITES; i++) {
        cout << p[i] << ",";
    }
    cout << "]";
}

int compute_result(int p[NUMSITES], int flow[NUMSITES][NUMSITES], int dist[NUMSITES][NUMSITES]){
    int sum = 0;
    for (int i = 0;i<NUMSITES;i++){
        for (int j = i; j< NUMSITES;j++)
        {
            sum += flow[p[i]-1][p[j]-1] * dist[i][j];
        }
    }
    return sum;
}

//swap two items in the permutation array
void swap(int p[NUMSITES], int i, int j)
{
    int temp = p[i];
    p[i] = p[j];
    p[j] = temp;
}

// generate random number between 0 and 1
float randomZeroAndOne()
{
    float scale=RAND_MAX+1.;
    float base=rand()/scale;
    float fine=rand()/scale;
    return base+fine/scale;
}

void simulated_annealing(int p[NUMSITES], int flows[NUMSITES][NUMSITES], int dists[NUMSITES][NUMSITES]) {

    // compute initial state valuation
    int previous_eval = compute_result(p,flows,dists);
    int current_eval = 10000;
    int iteration = 0;
    float current_temp = INITIAL_TEMP;
    int total_iteration = 0;

    while (current_temp > FINAL_TEMP) {
    //while (current_eval > FINAL_EVAL + 100) {
        while (iteration < ITERATION) {
            // get solution (randomly get a solution)
            int i = rand() % NUMSITES;
            int j = 0;
            do {
                j = rand() % NUMSITES;
            } while (j == i);

            swap(p,i,j);

            // calculate result
            current_eval = compute_result(p,flows,dists);
            if (current_eval - previous_eval >= 0) {
                // calculate probabilities
                float prob = randomZeroAndOne();
                float temp_prob = exp(-((float)(current_eval-previous_eval))/current_temp);
                if (prob > temp_prob) {
                    // revert move
                    swap(p,i,j);
                } else {
                    // keep move and update evaluation
                    previous_eval = current_eval;
                }
            } else {
                // keep move and update evaluation
                previous_eval = current_eval;
            }

            total_iteration++;
            iteration++;
            cout << "evaluation = " << current_eval << endl;
        }

        // decrease temperature
        current_temp *= DECREMENT_RULE;

        // reset iteration count
        iteration = 0;

    }

    cout << "Iteration Count = " << total_iteration << endl;
    cout << "Final Evaluation = " << current_eval << endl;
}

void crossover(int individual1[], int individual2[]) {
    // randomly select 2 points
    int a = rand() % NUMSITES;
    int b = rand() % NUMSITES;
    if (a > b) {
        // swap a and b
        int temp = a;
        a = b;
        b = temp;
    }

    // copy the stuff in between
    int child1[NUMSITES];
    int child2[NUMSITES];
    memset(&child1, 0, sizeof(child1));
    memset(&child2, 0, sizeof(child2));
    for (int j=0; j<NUMSITES; j++) {
        if (j >= a && j < b) {
            child1[j] = individual1[j];
            child2[j] = individual2[j];
        }
    }

    // copy the rest in order
    int combined1[NUMSITES*2];
    int combined2[NUMSITES*2];
    for (int i=0; i<NUMSITES; i++) {
        combined1[i] = individual1[i];
        combined1[NUMSITES+i] = individual1[i];
        combined2[i] = individual2[i];
        combined2[NUMSITES+i] = individual2[i];
    }
    // child1
    bool flag = false;
    int k = b;
    for (int i=b; i<b+NUMSITES; i++) {
        for (int j=a; j<b; j++) {
            if (combined2[i] == combined1[j]) {
                flag = true;
                break;
            }
        }
        if (!flag) {
            child1[k] = combined2[i];
            k++;
            if (k >= NUMSITES) {
                k = 0;
            }
        }
        flag = false;
    }
    // child2
    flag = false;
    k = b;
    for (int i=b; i<b+NUMSITES; i++) {
        for (int j=a; j<b; j++) {
            if (combined1[i] == combined2[j]) {
                flag = true;
                break;
            }
        }
        if (!flag) {
            child2[k] = combined1[i];
            k++;
            if (k >= NUMSITES) {
                k = 0;
            }
        }
        flag = false;
    }

    // copy array into individual 1 and individual 2
    for (int i=0; i<NUMSITES; i++) {
        individual1[i] = child1[i];
        individual2[i] = child2[i];
    }
}

void mutate(int individual[]) {
    // perform insert mutation
    int a = rand() % NUMSITES;
    int b = 0;
    do {
        b = rand() % NUMSITES;
    } while (a == b);
    if (a > b) {
        int temp = a;
        a = b;
        b = temp;
    }

    int i = b;
    int temp = individual[b];
    for (; i>a+1; i--) {
        individual[i] = individual[i-1];
    }
    individual[i] = temp;
}

void copy(int to[], int from[], int size) {
    for (int i=0; i<size; i++) {
        to[i] = from[i];
    }
}

void copy2D(int to[POPULATION][NUMSITES], int from[POPULATION][NUMSITES]) {
    for (int i=0; i<POPULATION; i++) {
        for (int j=0; j<NUMSITES; j++) {
            to[i][j] = from[i][j];
        }
    }
}

void shuffle(int p[NUMSITES]) {
    // shuffle
    for (int j=NUMSITES-1; j>0; j--) {
        int temp = rand() % (j+1);
        swap(p,temp,j);
    }
}

void genetic_algorithm(int p[NUMSITES], int flows[NUMSITES][NUMSITES], int dists[NUMSITES][NUMSITES]) {

    cout << "genetic algorithm started" << endl;

    // setup initial solutions (randomly pick solutions)
    int population[POPULATION][NUMSITES];
    for (int i=0; i<POPULATION; i++) {
        // copy
        for (int j = 0; j<NUMSITES; j++) {
            population[i][j] = p[j];
        }
        // shuffle
        shuffle(population[i]);
    }

    cout << "setup initial population" << endl;

    for (int genCount=0; genCount<GENERATIONS; genCount++) {
        // calculate fitness
        int fitness[POPULATION];
        float totalFitness = 0;
        for (int i=0; i<POPULATION; i++) {
            fitness[i] = compute_result(population[i], flows, dists);
            totalFitness += 2000/fitness[i];
        }

        // print best solution
        int best = INT_MAX;
        int bestIndividual = 0;
        for (int i=0; i<POPULATION; i++) {
            if (fitness[i] < best) {
                best = fitness[i];
                bestIndividual = i;
            }
        }
        cout << "best individual: fitness=" << best << ", solution=";
        print(population[bestIndividual]);
        cout << endl;

        // setup roulette
        Roulette_piece roulette[POPULATION];
        float count = 0;
        for (int i=0; i<POPULATION; i++) {
            count += (float)2000/(float)fitness[i]/totalFitness;
            roulette[i].end = count;
            roulette[i].index = i;
        }

        // perform roulette parent selection
        int nextGen[POPULATION][NUMSITES];
        float temp = 0;
        for (int j=0; j<POPULATION; j++) {
            temp = randomZeroAndOne();
            for (int i=0; i<POPULATION; i++) {
                if (temp <= roulette[i].end) {
                    copy(nextGen[j],population[roulette[i].index],NUMSITES);
                }
            }
        }

        // cross over and mutate
        for (int i=0; i<POPULATION; i+=2) {
            // perform oder-1 crossover
            if (randomZeroAndOne() <= CROSSOVER_PROB) {
                crossover(nextGen[i], nextGen[i+1]);
            }

            // perform mutation
            if (randomZeroAndOne() <= MUTATION_PROB) {
                mutate(nextGen[i]);
                mutate(nextGen[i+1]);
            }
        }

        // replace population with next generation
        bool eliteCopied = false;
        for (int i=0; i<POPULATION; i++) {
            if (i == bestIndividual && !eliteCopied) {
                eliteCopied = true;
            } else {
                for (int j=0; j<NUMSITES; j++) {
                    population[i][j] = nextGen[i][j];
                }
            }
        }
    }

    cout << "genetic algorithm finished" << endl;
}

int main(int argc, char* argv[]) {
    int p[NUMSITES];
    int flows[NUMSITES][NUMSITES];
    int dists[NUMSITES][NUMSITES];

    memset( &flows, 0, sizeof(flows));
    memset( &dists, 0, sizeof(dists));
    memset( &p, 0, sizeof(p));

    // initialize random seed
    srand(time(NULL));

    // get initial permutation line number
    int perm = atoi(getCmdOption(argv,argv+argc,"-perm"));

    // setup problem
    load_initial_permutation(p, perm);
    populate_array(flows, "flow.txt");
    populate_array(dists, "distance.txt");

    // SA
    simulated_annealing(p, flows, dists);
    //genetic_algorithm(p, flows, dists);

    /*
    int p2[NUMSITES];
    copy(p2,p,NUMSITES);
    shuffle(p2);
    cout << "orignal" << endl;
    print(p);
    cout << endl;
    print(p2);
    cout << endl;
    crossover(p,p2);
    cout << "after" << endl;
    print(p);
    cout << endl;
    print(p2);
    cout << endl;
    */

    return 0;
}

Please help me figure out how to code this using Clojure.

by Liz Wang at October 20, 2014 03:06 PM

TheoryOverflow

Prove that every language in P can be polynomially reduced to any other language in P

How can we prove this generalized statement?
P contains finite languages, like {0,1}, along with infinite languages.
I don't need the whole proof; I just need the basic idea or intuition.

by Praveen at October 20, 2014 03:05 PM

CompsciOverflow

Proof of big theta using induction [duplicate]

This question already has an answer here:

Here is a recursive definition for the runtime of some unspecified function; $a$ and $c$ are positive constants.

$T(n) = a$ if $n = 2$

$T(n) = 2T(n/2) + cn$ if $n > 2$

Use induction to prove that $T(n) = \Theta(n \log n)$.

Any idea on how to solve this?
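A sketch of the upper-bound half (assuming $n$ is a power of 2; the lower bound is symmetric). Guess $T(n) \le k n \log_2 n$ and substitute into the recurrence:

$T(n) = 2T(n/2) + cn \le 2k(n/2)\log_2(n/2) + cn = kn\log_2 n - kn + cn \le kn\log_2 n$, provided $k \ge c$.

The base case $T(2) = a \le k \cdot 2\log_2 2 = 2k$ holds for $k \ge a/2$, so any $k \ge \max(c, a/2)$ works.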

by Carol Doner at October 20, 2014 03:01 PM

What if $NP\subseteq BPP$?

I'm new to complexity and came upon the following exercise, which I'm unable to solve.

Prove that if $NP\subseteq BPP$ then $\Sigma_2^p=\Pi_4^p$.

by Rock at October 20, 2014 02:59 PM

StackOverflow

SBT - Run Task to set a SettingKey

So my general problem is that I want to set the version key based on the result of a task. However, the version key is set before the task is run. From what I understand, I can't change the value of a key once it is set, so I can't change this within my task.

What I want to do is run the task as a dependency to the publish task and change the value for version. I feel like there must be a way to do this, but I have no clues at the moment. Any help would be greatly appreciated.
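In sbt 0.13 a setting cannot depend on a task, so one sketch of a workaround (an assumption about the goal: derive the version from something computable when the build loads, with a git hash here as a hypothetical versioning scheme) is to do the computation inside the setting's initialization:

// build.sbt sketch: the shell-out runs once, at project load time,
// before `version` is needed by `publish`.
import scala.sys.process._

version := {
  val hash = "git rev-parse --short HEAD".!!.trim // hypothetical scheme
  "1.0.0-" + hash
}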

by John at October 20, 2014 02:44 PM

StackOverflow

Conditional function in APL

Is there a symbol or well-known idiom for the conditional function, in any of the APL dialects?

I'm sure I'm missing something, because it's such a basic language element. In other languages it's called conditional operator, but I will avoid that term here, because an APL operator is something else entirely.

For example C and friends have x ? T : F
LISPs have (if x T F)
Python has T if x else F
and so on.

I know modern APLs have :If and friends, but they are imperative statements to control program flow: they don't return a value, cannot be used inside an expression and certainly cannot be applied to arrays of booleans. They have a different purpose altogether, which is just fine by me.

The only decent expression I could come up with to do a functional selection is (F T)[⎕IO+x], which doesn't look particularly shorthand or readable to me, although it gets the job done, even on arrays:

      ('no' 'yes')[⎕IO+(⍳5)∘.>(⍳5)]
no  no  no  no  no
yes no  no  no  no
yes yes no  no  no
yes yes yes no  no
yes yes yes yes no

I tried to come up with a similar expression using squad, but failed miserably on arrays of booleans. Even if I could, it would still have to embed ⎕IO or a hardcoded 1, which is even worse as far as readability is concerned.

Before I go ahead and define my own if and use it on every program I will ever write, is there any canon on this? Am I missing an obvious function or operator?

(Are there any APL programmers on SO? :-)

by Tobia at October 20, 2014 02:35 PM

TheoryOverflow

Chomsky normal form method: CYK parser performance implications?

Chart parsers can be implemented based on Chomsky normal form or directly based on production rules. Let's for the moment assume we have a CYK chart parser that uses Chomsky normal form. The binarization is not uniquely defined. Does this impact the performance of the CYK chart parser? Can this be exploited to improve the performance of a CYK chart parser?

by user4258 at October 20, 2014 02:31 PM

StackOverflow

eclipse command in Play framework

In a Play application I found that the "eclipse" command works by default, without adding the "sbt eclipse" plugin to the plugins.sbt file. However, in the case of sbt alone this works only if the plugin definition is added. I was just wondering: is Play a wrapper over sbt with additional features available by default?

by Prathik Puthran at October 20, 2014 02:26 PM

Trimming strings in Scala

How do I trim the starting and ending character of a string in Scala?

If the input is

,hello   (or)   hello,

I need the output as hello

Is there any built-in method to do this in Scala?
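A sketch of one way (assuming the goal is to strip a specific character, a comma here, from either end; stripPrefix and stripSuffix are standard library methods):

// Remove a leading and/or trailing comma; other characters untouched.
def trimComma(s: String): String = s.stripPrefix(",").stripSuffix(",")

trimComma(",hello") // "hello"
trimComma("hello,") // "hello"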

by raHul at October 20, 2014 02:12 PM

apache spark textfile to a string

val test= sc.textFile(12,logFile).cache()

In the above code snippet, I am trying to make Apache Spark parallelize reading a huge text file. How do I store the contents of this in a string?

I was earlier doing this to read it:

val lines = scala.io.Source.fromFile(logFile, "utf-8").getLines.mkString

but now I am trying to make the read faster using the Spark context.
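A sketch of both fixes (an assumption about intent; note that textFile takes the path first and a minimum partition count second, and that collecting a huge file into one String pulls everything onto the driver):

// Read with at least 12 partitions, then gather the lines back to
// the driver and join them into a single String.
val text: String = sc.textFile(logFile, 12).collect().mkString("\n")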

by Siva at October 20, 2014 01:58 PM

/r/emacs

Some regex-related functionality that people might find useful

A couple of Emacs devs have shared some code for visualising ELisp REs, as attachments to this message and this message; further details in the messages themselves.

submitted by flexibeast
[link] [1 comment]

October 20, 2014 01:55 PM

StackOverflow

With Underscore, how do I recursively flatten an array of objects?

I have a tree/traversable object that looks like this:

var data = {children: [
  {
    name: 'foo',
    url: 'http://foo',
    children: [
      {
        name: 'bar',
        url: 'http://bar',
        children: []
      }
    ]
  },
  {
    name: 'baz',
    url: 'http://baz',
    children: []
  },
  {
    name: 'biff',
    children: []
  }
]};

What I need to do is be able to flatten the data into a single dimensional list:

var flattenedData = [{name: 'foo', url: 'http://foo'}, {name: 'bar', url: 'http://bar'}, {name: 'baz', url: 'http://baz'}, {name: 'biff'}];

Currently, I've created a recursive helper function to walk the data structure and push the results onto an array. I'd like to do this more functionally if possible. Something like:

var flattenedData = _.chain(data.children).flatten().filter(function(item){//real filtering; return item;}).value();

The problem is, flattening doesn't seem to flatten an array of objects, just simple arrays. I could be wrong.

How would I perform this task in a more functional way without traversing the tree in a helper function?

by Jim Wharton at October 20, 2014 01:43 PM

Compojure-specific destructuring and query strings

I'm trying to access the parameter foo, using compojure, in a request like this:

/api/xyz?foo=bar 

The Compojure destructuring syntax looks good, so I would like to use it. However, the following just serves me the "Page not found":

(defroutes app-routes    
  (GET "/api/xyz/:foo" [foo] (str "foo: " foo))
  (route/not-found "Page not found"))

This is kind of weird, since the verbose destructuring below works and gives me "foo: bar":

(defroutes app-routes    
  (GET "/api/xyz" {{foo :foo} :params} (str "foo: " foo))
  (route/not-found "Page not found"))

What am I missing?

by 4ZM at October 20, 2014 01:41 PM

AWS

AWS Week in Review - October 13, 2014

Let's take a quick look at what happened in AWS-land last week:

Monday, October 13
Tuesday, October 14
Wednesday, October 15
Thursday, October 16
Friday, October 17

Here are some of the events that we have on tap for the next week or two:

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

-- Jeff;

by Jeff Barr (awseditor@amazon.com) at October 20, 2014 01:35 PM

StackOverflow

Best practice to setup env variables with ansible

I'm just starting with Ansible. To be more specific, I wanted to set up several ZooKeeper Vagrant machines, and for this I've made a separate role - zookeeper. The install task was easy, but what's the best way to configure env vars in general? In my use case I've added two vars into vars/main.yml:

---
env:
  ZOOKEEPER_INSTALL: "/usr/share/zookeeper"
  PATH: "$PATH:$ZOOKEEPER_INSTALL/bin"

Then added this to the role task:

- name: Export ZooKeeper env vars
  shell: export env

But I didn't find these vars in printenv. Then I tried replacing the shell module with environment, but that seems to do a different kind of thing. So what's the proper way to set env vars, so that they still work after a server reboot/vagrant halt?

by 4lex1v at October 20, 2014 01:23 PM

AWS

Fast, Easy, Free Data Sync from RDS MySQL to Amazon Redshift

As you know, I'm a big fan of Amazon RDS. I love the fact that it allows you to focus on your applications and not on keeping your database up and running. I'm also excited by the disruptive price, performance, and ease of use of Amazon Redshift, our petabyte-scale, fully managed data warehouse service that lets you get started for $0.25 per hour and costs less than $1,000 per TB per year. Many customers agree, as you can see from recent posts by Pinterest, Monetate, and Upworthy.

Many AWS customers want to get their operational and transactional data from RDS into Redshift in order to run analytics. Until recently, it's been a somewhat complicated process. A few weeks ago, the RDS team simplified the process by enabling row-based binary logging, which in turn has allowed our AWS Partner Network (APN) partners to build products that continuously replicate data from RDS MySQL to Redshift.

Two APN data integration partners, FlyData and Attunity, currently leverage row-based binary logging to continuously replicate data from RDS MySQL to Redshift. Both offer free trials of their software in conjunction with Redshift's two month free trial. After a few simple configuration steps, these products will automatically copy schemas and data from RDS MySQL to Redshift and keep them in sync. This will allow you to run high performance reports and analytics on up-to-date data in Redshift without having to design a complex data loading process or put unnecessary load on your RDS database instances.

If you're using RDS MySQL 5.6, you can replicate directly from your database instance by enabling row-based logging, as shown below. If you're using RDS MySQL 5.5, you'll need to set up a MySQL 5.6 read replica and configure the replication tools to use the replica to sync your data to Redshift. To learn more about these two solutions, see FlyData's Free Trial Guide for RDS MySQL to Redshift as well as Attunity's Free Trial and the RDS MySQL to Redshift Guide. Attunity's trial is available through the AWS Marketplace, where you can find and immediately start using software with Redshift with just a few clicks.

Informatica and SnapLogic also enable data integration between RDS and Redshift, using a SQL-based mechanism that queries your database to identify data to transfer to your Amazon Redshift clusters. Informatica is offering a 60-day free trial and SnapLogic has a 30 day free trial.

All four data integration solutions discussed above can be used with all RDS database engines (MySQL, SQL Server, PostgreSQL, and Oracle). You can also use AWS Data Pipeline (which added some recent Redshift enhancements), to move data between your RDS database instances and Redshift clusters. If you have analytics workloads, now is a great time to take advantage of these tools and begin continuously loading and analyzing data in Redshift.

Enabling Amazon RDS MySQL 5.6 Row Based Logging
Here's how you enable row based logging for MySQL 5.6:

  1. Go to the Amazon RDS Console and click Parameter Groups in the left pane.
  2. Click on the Create DB Parameter Group button and create a new parameter group in the mysql5.6 family.
  3. Once in the detail view, click the Edit Parameters button. Then set the binlog_format parameter to ROW.
For more details please see Working with MySQL Database Log Files.

Free Trials for Continuous RDS to Redshift Replication from APN Partners
FlyData has published a step-by-step guide and a video demo showing how to continuously and automatically sync your RDS MySQL 5.6 data to Redshift, and you can get started for free for 30 days. You will need to create a new parameter group with binlog_format set to ROW and binlog_checksum set to NONE, and adjust a few other parameters as described in the guide above.

AWS customers are already using FlyData for continuous replication to Redshift from RDS. For example, rideshare startup Sidecar seamlessly syncs tens of millions of records per day to Redshift from two RDS instances in order to analyze how customers utilize Sidecar's custom ride services. According to Sidecar, their analytics run 3x faster and the near-real-time access to data helps them to provide a great experience for riders and drivers. Here's the data flow when using FlyData:

Attunity CloudBeam has published a configuration guide that describes how you can enable continuous, incremental change data capture from RDS MySQL 5.6 to Redshift (you can get started for free for 5 days directly from the AWS Marketplace). You will need to create a new parameter group with binlog_format set to ROW and binlog_checksum set to NONE.

For additional information on configuring Attunity for use with Redshift please see this quick start guide.

Redshift Free Trial
If you are new to Amazon Redshift, you’re eligible for a free trial and can get 750 free hours for each of two months to try a dw2.large node (16 GB of RAM, 2 virtual cores, and 160 GB of compressed SSD storage). This gives you enough hours to continuously run a single node for two months. You can also build clusters with multiple dw2.large nodes to test larger data sets; this will consume your free hours more quickly. Each month's 750 free hours are shared across all running dw2.large nodes in all regions.

To start using Redshift for free, simply go to the Redshift Console, launch a cluster, and select dw2.large for the Node Type.

Big Data Webinar
If you want to learn more, do not miss the AWS Big Data Webinar showcasing how startup Couchsurfing used Attunity’s continuous CDC to reduce their ETL process from 3 months to 3 hours and cut costs by nearly $40K.

-- Jeff;

by Jeff Barr (awseditor@amazon.com) at October 20, 2014 01:09 PM

StackOverflow

Scala: Abstract types vs generics

I was reading A Tour of Scala: Abstract Types. When is it better to use abstract types?

For example,

abstract class Buffer {
  type T
  val element: T
}

rather that generics, for example,

abstract class Buffer[T] {
  val element: T
}

by thatismatt at October 20, 2014 01:08 PM
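For context, a sketch of one commonly cited difference (adapted from the tour's SeqBuffer example): an abstract type member can be refined incrementally in subclasses without threading extra type parameters through every signature. GBuffer below is the question's generic Buffer, renamed to avoid a clash:

abstract class SeqBuffer extends Buffer {
  type U
  type T <: Seq[U]                 // refine the inherited member with a bound
  def length: Int = element.length
}

// with generics, the same refinement forces both parameters into the
// subclass signature
abstract class GBuffer[T] { val element: T }
abstract class GSeqBuffer[U, T <: Seq[U]] extends GBuffer[T] {
  def length: Int = element.length
}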

Solving O/R Impedence mismatch using Scala + Slick

Let's say I have the following tables in my database:

CREATE TABLE dealers(
 id INT PRIMARY KEY, 
 name VARCHAR(255)
);    

CREATE TABLE makers(
 id INT PRIMARY KEY, 
 name VARCHAR(255)
);

CREATE TABLE cars(
 id INT PRIMARY KEY, 
 make INT FOREIGN KEY makers(id), 
 model VARCHAR(255), 
 year INT
);

CREATE TABLE cars_in_dealers(
 car_id INT FOREIGN KEY cars(id), 
 dealer_id INT FOREIGN KEY dealers(id), 
 UNIQUE KEY (car_id, dealer_id)
);

Given such a schema, I want to use Slick to load dealers in Scala:

case class Dealer(id: Int, name: String, models: Set[Car])
case class Car(id: Int, make: Maker, model: String, year: Int)
case class Maker(id: Int, name: String)

How about something a bit more complicated:

What if I wanted to keep track of the count of each model in each dealership:

case class Dealer(id: Int, name: String, models: Map[Car, Int])

and this was my mapping table instead:

CREATE TABLE cars_in_dealers(
 car_id INT FOREIGN KEY cars(id), 
 dealer_id INT FOREIGN KEY dealers(id), 
 count INT,
 UNIQUE KEY (car_id, dealer_id)
);

I am familiar with Ruby's ActiveRecord and Java's Hibernate framework, where these things are easy to do, but I am having a hard time doing it in Slick, since Slick does not map nested models onto foreign-keyed tables. I am using Slick's codegen, which only generates the following classes:

case class DealersRow(id: Int, name: String)
case class MakersRow(id: Int, name: String)
case class CarsRow(id: Int, make: Int, model: String, year: Int)
case class CarsInDealersRow(carId: Int, dealerId: Int)
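In the meantime, a sketch of one workaround, not a Slick feature: query the flat codegen rows however you like, then assemble the nested case classes in plain Scala (assumes the row classes above and the case classes from the question):

def assemble(dealers: Seq[DealersRow], makers: Seq[MakersRow],
             cars: Seq[CarsRow], links: Seq[CarsInDealersRow]): Seq[Dealer] = {
  // index the lookup tables once
  val makerById = makers.map(m => m.id -> Maker(m.id, m.name)).toMap
  val carById   = cars.map(c => c.id -> Car(c.id, makerById(c.make), c.model, c.year)).toMap
  // fold the join rows into each dealer's set of cars
  dealers.map { d =>
    Dealer(d.id, d.name, links.filter(_.dealerId == d.id).map(l => carById(l.carId)).toSet)
  }
}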

by wrick at October 20, 2014 01:08 PM

QuantOverflow

garchOxFit in R-oxo file does not match

Could someone please help me get the Ox interface to work in R. I get the following errors as output:

This version may be used for academic research and teaching only. Link error: 'packages/Garch42/garch' please recompile .oxo file to match this version of Ox. Error in file(file, "r"): cannot open the connection. In addition: Warning messages: 1: running command 'C:\Ox\bin\oxl.exe C:\Ox\lib\GarchOxModelling.ox' had status 1 2: In file(file, "r"): cannot open file 'OxResiduals.csv': No such file or directory

by sema at October 20, 2014 01:05 PM

StackOverflow

Haskell Location Definition "No instance for (Fractional Int) arising from a use of ‘/’"

I'm getting the error "No instance for (Fractional Int) arising from a use of ‘/’" from inside a local definition in a function which determines how many of the (three) given integers are above the average of all of them. I've created a local definition inside the function to calculate the average, so I can then use it for guard checks. I've used the same code in a separate definition (in another file) to calculate the average, and it works there. I've tried putting the fromIntegral call in different places, but it's not working. Where am I going wrong?

Here's the code:

howManyAboveAverage :: Int -> Int -> Int -> Int
howManyAboveAverage a b c
    | a > average && b > average = 2
    | a > average && c > average = 2
    | b > average && c > average = 2
    | a > average = 1
    | b > average = 1
    | c > average = 1
    | otherwise = 0
                where
                average = fromIntegral (a + b + c) / 3

The error is being flagged up on the last line.

Thanks.

by EM-Creations at October 20, 2014 01:02 PM

TheoryOverflow

What is the minimum size of a circuit that computes PARITY?

It is a classic result that every fan-in 2 AND-OR-NOT circuit that computes PARITY from the input variables has size at least $3(n-1)$ and this is sharp. (We define size as the number of AND and OR gates.) The proof is by gate-elimination and it seems to fail if we allow arbitrary fan-in. What is known for this case?

Specifically, does anyone know an example when larger fan-in helps, i.e., we need less than $3(n-1)$ gates?

Update Oct 18. Marzio has shown that for $n=3$ even $5$ gates suffice using the CNF form of PARITY. This implies a bound of $\lfloor \frac 52 n \rfloor-2$ for general $n$. Can YOU do better?

by domotorp at October 20, 2014 12:47 PM

Lobsters

What are you working on this week?

It’s Monday, which means it’s time for our weekly “What are you working on” thread! Please share links and tell us about your current project. Do you need feedback, proofreading, collaborators?

by japesinator at October 20, 2014 12:43 PM

Planet FreeBSD

EuroBSDCon Trip Report: Bjoern Heidotting

The FreeBSD Foundation was a gold sponsor of EuroBSDCon 2014, which was held in Sofia, Bulgaria in September. The Foundation also sponsored Bjoern Heidotting to attend the conference, who provides the following trip report:

Since I'm fairly new to the FreeBSD community, I would like to introduce myself first. My name is Bjoern Heidotting, I live in Germany, I work as a system administrator, and I have been a FreeBSD user since 2006 and a contributor since 2012. I mostly contribute patches for the German documentation in the doc tree. Why do I contribute? Well, the short version is that I simply wanted to give something back to FreeBSD and the community.

Thanks to Benedict Reuschling, who invited me, and the FreeBSD Foundation, I was able to attend the DevSummit and the conference at EuroBSDCon 2014 in Sofia.

I arrived at Sofia airport on Wednesday and took a taxi to my hotel, the Best Western Expo, located directly at the IEC where the conference was held. However, the taxi driver decided to take me on a sightseeing tour through the city of Sofia, so it took 1.5 hours before I finally arrived at the hotel; the actual trip from the airport takes about 10 minutes. Fortunately, taxis are cheap in Bulgaria compared to Germany. And the city is really, really worth seeing.

Later that day, I met Daniel Peyrolon, a GSoC student with whom I shared a room. We decided to have dinner together and started getting to know each other. Afterwards, we socialized with some other FreeBSD people at the hotel bar.

On Thursday the DevSummit started with every attendee and developer introducing themselves. Then some interesting topics and roadmaps were discussed for the upcoming 11.0 release, as well as other topics such as ASLR, UEFI, and 10G Ethernet, just to name a few. It was a very interesting brainstorming session with valuable input from all attendees. Since it was my first time at a DevSummit, I was impressed to see how fast these people can fill a bunch of slides with topics and ideas. Awesome!

After lunch a small group, including me, sat together in another room where I started to work on several patches for the Handbook. In the evening we had dinner at Lebed Restaurant. A very nice location. This is where I first met Deb Goodkin from the Foundation. She was the one I talked to prior to the conference and she brought Daniel and me together. Thank you Deb. It was very nice meeting her.

On Friday I mostly worked on a big patch for the network-servers section in the Handbook. I also met Beat Gaetzi while catching fresh air outside and we talked about our roles in the Project and what we do. After lunch the documentation topic started, which I was very interested in. We talked about issues on the website, Handbook sections, etc. The details of the session can be found on the wiki.

In the evening we had dinner at "The Windmill" and I met Henning Brauer from the OpenBSD project. It was really fun talking to him. Man, this guy can tell crazy stories.

Saturday and Sunday were conference days, with one interesting talk chasing the next. All the talks were great, although I had some favorites, including "Snapshots, Replication, and Boot-Environments" by Kris Moore, "Introducing ASLR in FreeBSD" by Shawn Webb, and "Securing sensitive & restricted data" by Dag-Erling Smorgrav. One of the highlights for me was the social event in Hotel Balkan on Saturday. Again, meeting the people behind the email addresses and talking to them was a great experience.

A big thanks goes out to Shteryana Shopova and her crew for organizing this great event.

by Dru Lavigne at October 20, 2014 12:36 PM

Planet Emacsen

Irreal: Stallman on the History of Emacs and GNU

Here's an interesting video from 2002 of Richard Stallman talking about the history of Emacs and the GNU project. As far as I can tell, this is the talk whose transcript I wrote about 3 years ago. It's about 40 minutes so plan accordingly.

by jcs at October 20, 2014 12:33 PM

StackOverflow

Installing Play Framework on Windows 8.1

I have been trying to install Play Framework on Windows 8.1 through "activator". When I ran the activator script, it gave an error telling me it couldn't find Java. I didn't want to mess with environment variables, so I ran the jar file inside the "activator" directory, but now I get the error below.

C:\activator-1.2.10>java -jar activator-launch-1.2.10.jar
java.lang.RuntimeException: Property 'activator.home' has not been set
        at activator.properties.ActivatorProperties.requirePropertyWithOverrides
(ActivatorProperties.java:64)
        at activator.properties.ActivatorProperties.ACTIVATOR_HOME(ActivatorProp
erties.java:118)
        at activator.ActivatorLauncher.openDocs(ActivatorLauncher.scala:42)
        at activator.ActivatorLauncher.displayHelp(ActivatorLauncher.scala:72)
        at activator.ActivatorLauncher.run(ActivatorLauncher.scala:32)
        at xsbt.boot.Launch$$anonfun$run$1.apply(Launch.scala:109)
        at xsbt.boot.Launch$.withContextLoader(Launch.scala:129)
        at xsbt.boot.Launch$.run(Launch.scala:109)
        at xsbt.boot.Launch$$anonfun$apply$1.apply(Launch.scala:36)
        at xsbt.boot.Launch$.launch(Launch.scala:117)
        at xsbt.boot.Launch$.apply(Launch.scala:19)
        at xsbt.boot.Boot$.runImpl(Boot.scala:44)
        at xsbt.boot.Boot$.main(Boot.scala:20)
        at xsbt.boot.Boot.main(Boot.scala)

It tells me that activator.home is not set, but I haven't installed Play yet. Do I have to add something to the system variables?

by zamk at October 20, 2014 12:21 PM

QuantOverflow

Data on margin volumes?

I came across a Financial Times article today that said "Peaks in margin trading have been a precursor to bear runs in the past, notably in March 2000 and July 2007."

I'm curious if anyone here would know if there is a common data source to get an aggregate sense of how much margin investors are taking on? Or is this just something that we might infer from interest rates in the economy? Historical data would be especially awesome of course.

It would also be interesting to hear if anyone has ever backtested a strategy using something like this, and if so how it worked out.

by jamos125 at October 20, 2014 12:07 PM

CompsciOverflow

Are "Flow Free" puzzles NP-hard?

A "Flow Free" puzzle consists of a positive integer $n$ and a set of (unordered) pairs of distinct
vertices in the $\:n\times n\:$ grid graph such that each vertex is in at most one pair. $\:$ A solution to such
a puzzle is a set of undirected paths in the graph such that each vertex is in exactly one path
and each path's set of ends is one of the puzzle's pairs of vertices. $\:$ This image is an example
of a Flow Free puzzle, and this image is an example of a solution to a different Flow Free puzzle.

Is the problem "Does there exist a solution to this Flow Free puzzle?" NP-hard?
Does it matter whether $n$ is given in unary or binary?

by Ricky Demer at October 20, 2014 12:06 PM

StackOverflow

ZMQ Produces EAGAIN on Publisher Socket

I've made a simple C++ program to start working with 0MQ. I have two applications: a server (with a binding publisher socket) and a client (with a connecting subscriber socket). The server program is pulled and run from a remote machine -- let's call it example.com.

ZMQ produces an EAGAIN when the server sends a simple string message with no flags. I know this from the ZMQ C++ binding I'm using; the socket_t::send() function returns false only when this error is raised. I am not using the overload that returns an integer, so I am certain the return value is boolean false.

The behavior persists even when the client is running and connected. The machine has all incoming and outgoing ports open.

Why would ZMQ produce this error? In particular, EAGAIN should only be raised in non-blocking mode, but I specifically never ask for this mode. Is this a functionality of a publisher socket?

by Airzooka at October 20, 2014 12:03 PM

Clean up Play-framework based project

After running a new Play Framework 2.0 based project, I failed to clean it - the generated stuff persists, as shown below:

 $ play new myapp
   > app name: myapp
   > template: java app

myapp/
├── app
│   ├── controllers
│   └── views
├── conf
├── project
└── public
    ├── images
    ├── javascripts
    └── stylesheets

$ cd myapp
$ play
  [myapp] run 12345

Ctrl+D

  [myapp] clean

myapp/
├── app
│   ├── controllers
│   └── views
├── conf
├── logs
├── project
│   ├── project
│   │   └── target
│   │       └── config-classes
│   └── target
│       ├── scala-2.9.1
│       │   └── sbt-0.11.2
│       │       ├── cache
│       │       │   ├── compile
│       │       │   └── update
│       │       └── classes
│       └── streams
│           ├── compile
│           │   ├── compile
│           │   │   └── $global
│           │   ├── compile-inputs
│           │   │   └── $global
│           │   ├── copy-resources
│           │   │   └── $global
│           │   ├── defined-sbt-plugins
│           │   │   └── $global
│           │   └── $global
│           │       └── $global
│           └── $global
│               ├── compilers
│               │   └── $global
│               ├── ivy-configuration
│               │   └── $global
│               ├── ivy-sbt
│               │   └── $global
│               ├── project-descriptors
│               │   └── $global
│               └── update
│                   └── $global
├── public
│   ├── images
│   ├── javascripts
│   └── stylesheets
└── target

How can I clean it up completely?

by sof at October 20, 2014 11:34 AM

CompsciOverflow

Sorting with a recursive oracle

It is known that the runtime complexity of comparison-based sorting is $\Theta(n \log n)$. But what if we have, for every input array of size $n$, an oracle that can sort any array of $k<n$ numbers in constant time?

In this case, the runtime of merge sort becomes $O(n)$. The recursive calls are cheap and the runtime is dominated by the merging step.

Does there exist a more efficient algorithm for sorting, using these oracles? My guess is that the answer is negative, i.e. sorting with recursive oracles has runtime complexity $\Theta(n)$. Is this correct?

NOTE: this is a special case of the following question from cstheory.SE: http://cstheory.stackexchange.com/questions/27094/are-there-problems-for-which-divide-and-conquer-is-provably-useless

by Erel Segal Halevi at October 20, 2014 11:32 AM

StackOverflow

Scala object struggles with Java Class.newInstance()

UPDATE:

I have somewhat resolved the issue. Just in case anyone runs into the same problem, here is the simplest solution: looking at the MTApplication source code, I discovered that the initialize() method is overloaded, taking a String parameter with the name of the class to instantiate. So if I create a separate class that extends MTApplication and pass its name there, everything works correctly.

END OF UPDATE

I have a situation in Scala while trying to use a java library (MT4j, which is based on Processing). The library wants to instantiate the main class of the app (the caller-class):

  Class<?> c = Thread.currentThread().getContextClassLoader().loadClass(name);
  applet = (PApplet) c.newInstance();

This is so it can refer to the instance later in its work.

However, it fails because, I guess, the main Scala class is not a class but an object, and due to the library's structure it is necessary to call the static method initialize() of the main library class, MTApplication. In Java static fields live in classes, but in Scala they live in objects. So it is impossible to instantiate the object, and the library fails. In contrast to MT4j, Processing itself makes no calls to static methods on startup and successfully passes that phase.

If I just create a companion class, everything works fine except that the companion class does not get its fields initialized, because the static initialize() method is called on the companion object; the class instance is effectively dead on arrival and the library becomes unusable.

At least that is how I understand this problem.

I get this error:

Exception in thread "main" java.lang.RuntimeException: java.lang.IllegalAccessException: Class processing.core.PApplet can not access a member of class main.Main$ with modifiers "private"
    at processing.core.PApplet.runSketch(PApplet.java:9103)
    at processing.core.PApplet.main(PApplet.java:9292)
    at org.mt4j.MTApplication.initialize(MTApplication.java:311)
    at org.mt4j.MTApplication.initialize(MTApplication.java:263)
    at org.mt4j.MTApplication.initialize(MTApplication.java:254)
    at main.Main$.main(Main.scala:26)
    at main.Main.main(Main.scala)

It is hard for me to explain, also because I do not fully understand what is going on here. But anyone who has these libraries can reproduce the situation in a couple of minutes by trying to launch the main class.

The abstract startUp() method, which must be implemented to start the app, makes everything look even more sad. It initializes the object, but what the library tries to work with is an instance of the companion class, which does not get initialized, because in Scala the method belongs to the object.

My code:

object Main extends MTApplication {

    def main(args: Array[String]) {
        MTApplication.initialize()
        new Main().startUp()
    }

    //this method is abstract so it MUST be implemented
    override def startUp(){ 
    }

}

class Main extends MTApplication {

    override def startUp(){
       //startup here
    }
}

I am sorry if my explanations are vague; I just do not get it all completely. It is probably easier to understand by repeating the experiment with the MT4j library, with the Processing source code instead of the pre-linked 'core.jar', to see what is happening inside. Does anyone have ideas for a workaround here?
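To make the update at the top concrete, here is a sketch of the fix it describes, assuming initialize(String) takes the fully qualified name of the class to instantiate reflectively (MainApp is a made-up name for the separate class):

object Main {
  def main(args: Array[String]): Unit = {
    // pass a real class (not this object) for the library to instantiate
    MTApplication.initialize(classOf[MainApp].getName)
  }
}

class MainApp extends MTApplication {
  override def startUp(): Unit = {
    // startup here
  }
}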

by noncom at October 20, 2014 11:26 AM

SICP, Continuation Passing Style and Clojure's trampoline

I am working through SICP, and exercise 2.29-b gave me the opportunity to have fun with continuation-passing style (CPS) while traversing mobiles and branches.

To make a long story short, each mobile has a left and a right branch, each composed of a length and either a numeric weight or another mobile. The question asks to find the total weight given a mobile.

After the first, quite simple, mutually recursive solution, I successfully implemented a CPS one:

(defn total-weight-cps [mobile]
  (letfn 
    [(branch-weight-cps
      [branch kont]
      (let [structure (branch-structure branch)]
        (if (mobile? (branch-structure branch))
          (do (println "then " structure) (kont (traverse-mobile-cps structure identity)))
          (do (println "else " structure) (kont structure)))))

     (traverse-mobile-cps
      [mobile kont]
      (branch-weight-cps (left-branch mobile)
                         (fn [left-weight]
                           (branch-weight-cps (right-branch mobile)
                                              (fn [right-weight] (kont (+ left-weight right-weight)))))))]

    (traverse-mobile-cps mobile identity)))

At this point, I tried to apply trampoline in order to preserve my stack. But it blows up with the following exception:

java.lang.ClassCastException: sicp_clojure.2_1_exercises_2_24_2_32$total_weight_STAR_$traverse_mobile_cps__6694$fn__6695$fn__6696$fn__6697 cannot be cast to java.lang.Number
Numbers.java:126 clojure.lang.Numbers.add
.../git/sicp-clojure/src/sicp_clojure/2_1_exercises_2_24_2_32.clj:185 sicp-clojure.2-1-exercises-2-24-2-32/total-weight*[fn]
core.clj:5801 clojure.core/trampoline
core.clj:5806 clojure.core/trampoline
RestFn.java:439 clojure.lang.RestFn.invoke
.../git/sicp-clojure/src/sicp_clojure/2_1_exercises_2_24_2_32.clj:186 sicp-clojure.2-1-exercises-2-24-2-32/total-weight*

The code using trampoline, following the excellent link, is:

(defn total-weight* [mobile]
  (letfn 
    [(branch-weight-cps
      [branch kont]
      (let [structure (branch-structure branch)]
        (if (mobile? (branch-structure branch))
          (do (println "then " structure) (kont (traverse-mobile-cps structure identity)))
          (do (println "else " structure) (kont structure)))))

     (traverse-mobile-cps
      [mobile kont]
      (branch-weight-cps (left-branch mobile)
                         (fn [left-weight]
                           (branch-weight-cps (right-branch mobile)
                                              (fn [right-weight] #(kont (+ left-weight right-weight)))))))]
    (trampoline traverse-mobile-cps mobile identity)))

And finally some sample data:

(def branch11 (make-branch 1 1))
(def branch22 (make-branch 2 2))
(def branch36 (make-branch 3 6))
(def branch43 (make-branch 4 3))

(def mobile11-43 (make-mobile branch11 branch43))
(def mobile36-22 (make-mobile branch36 branch22))

(def branch5m1143 (make-branch 5 mobile11-43))
(def branch7m3622 (make-branch 7 mobile36-22))
(def mobile5m1143-7m3622 (make-mobile branch5m1143 branch7m3622))

(total-weight* mobile5m1143-7m3622)

Why does it blow up?

by Andrea Richiardi at October 20, 2014 11:17 AM

CompsciOverflow

string similarity disjointness threshold theory [on hold]

I have been searching around (for the past month!) for the theoretical background of the so-called disjointness between strings, especially for strings with k mismatches. There are hundreds of papers (and algorithms) for finding strings within a GIVEN DISTANCE d, or with K GIVEN mismatches. Well, I want to read anything about the distance d and/or the k themselves. Why and when are they big enough? When can we consider two strings disjoint? When can I safely say (prove?) that two given strings are not "related" (that they are "strangers")? This would presumably be because the search space from the one string to the other is too big (my problem is how to theoretically determine this "big").

I am not trying to solve a university/college essay. This is part of my research, and I would appreciate it if someone pointed me to where to look or what to begin with (e.g. relative entropy, string distances, or anything else). Thanks.

by George at October 20, 2014 11:16 AM

Patent-free algorithm for labeling a MRF/CRF

At the moment I'm implementing a video segmentation algorithm for moving foreground/background. In the literature a graph-cut algorithm is often used, which is covered by US patent 6,973,212.

Are there any alternatives to assign labels to nodes of a Markov-Random-Field or Conditional-Random-Field, which don't use patented algorithms, and are relatively efficient?

(Please give me a hint, whether I'm on the right stackexchange site)

by Thomas Rebele at October 20, 2014 11:07 AM

Fefe

In the old days, nasty cyberterrorists still had to work out ...

In the old days, nasty cyberterrorists still had to work out how to get their malware into the companies they wanted to attack. They had to do reconnaissance, find frequently visited websites, and then place an exploit there.

Today the ad networks are so finely configurable that you just upload your malware there and click "for Invincea".

October 20, 2014 11:01 AM

StackOverflow

json4s object extraction with extra data

I'm using Spray with json4s, and I've got the implementation below to handle PUT requests for updating objects. My problem with it: I first extract an instance of SomeObject from the JSON, but being a RESTful API, I want the ID to be specified in the URL. So I must then somehow create another instance of SomeObject that is indexed with the ID. To do this, I'm using a constructor like SomeObject(id: Long, obj: SomeObject). It works well enough, but the implementation is ugly and it feels inefficient. What can I do to stick the ID in there so that I'm only creating one instance of SomeObject?

class ApplicationRouter extends BaseRouter {
  val routes =
    pathPrefix("some-path") {
      path("destination-resource" \ IntNumber) { id =>
        entity(as[JObject]) { rawData =>
          val extractedObject = rawData.camelizeKeys.extract[SomeObject]
          val extractedObjectWithId = SomeObject(id, extractedObject)
          handleRequest(extractedObjectWithId)
        }
      }
    }
}

case class SomeObject(id: Long, data: String, someValue: Double, someDate: DateTime) {
  def this(data: String, someValue: Double, someDate: DateTime) = this(0, data, someValue, someDate)
  def this(id: Long, obj: SomeObject) = this(id, obj.data, obj.someValue, obj.someDate)
}
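For comparison, a sketch of one alternative: since SomeObject is a case class, the extracted instance can be re-keyed with copy, which drops the second auxiliary constructor (it still allocates a second instance, though):

entity(as[JObject]) { rawData =>
  // extract with the default id (0), then overwrite it from the URL
  val extractedObject = rawData.camelizeKeys.extract[SomeObject].copy(id = id)
  handleRequest(extractedObject)
}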

by JBarber at October 20, 2014 10:58 AM

Fred Wilson

The Personal Cloud

Benedict Evans coined the term “personal cloud” in his writeup of WWDC in June. He said:

what you might call the personal cloud – the Bluetooth LE/Wifi mesh around you (such as HealthKit or HomeKit)

I like to think about what’s next.

Paul Graham said, “If you think of technology as something that’s spreading like a sort of fractal stain, almost every point on the edge represents an interesting problem.”

And in that context, the personal cloud is a particularly interesting “point on the edge” to me. It includes the following things:

1) NFC and other technologies that will turn the mobile phone into your next credit card

2) Phone to phone mesh networking like we saw with Fire Chat in Hong Kong a few weeks ago

3) Wearables like the watch, necklace, and earbud

4) Personal health data recording (HealthKit) in which your phone has a real time and historical chart of your heartbeat, blood chemistry, blood pressure, pulse, temperature, and much more.

5) AirPlay and Chromecast and other technologies that will turn the mobile phone into both the next set-top box and remote

I could probably go on and list another five things that fit into the personal cloud, but I will stop there.

If the first wave of the mobile phone’s impact on the tech sector was driven by applications running on the phone, the second wave will be driven by the phone connecting to other devices, including other phones.

I am particularly fascinated about what happens when our phones connect to other phones in dense environments and form meshes that don’t need the traditional Internet connectivity to power them. Mesh networks don’t just solve the problem of lack of traditional connectivity (Hong Kong), they also produce a solution to the last mile connectivity duopoly in wireline and oligopoly in wireless. In the future we may just opt out of those non-competitive markets and opt into a local mesh to get us to the Internet backbone, both in our homes and when we are out and about.

And phone to phone meshes form local “geofenced” networks that are interesting in their own right. A nice example of this is the peek feature in Yik Yak where you can see the timeline at various universities around the US. These Yik Yak peeks are not powered by mesh networking, they are just using the geolocation feature on the phone. But they could be a collection of mesh networks operating in various universities around the country. And so that example is enlightening to me.

I wanted to end this post with an image of a person walking down the street surrounded by their personal cloud and all the devices that are connected to it. But a quick image search did not produce it for me. That in and of itself is telling. That’s our future. But right now we are still in the imagining phase of it.

by Fred Wilson at October 20, 2014 10:42 AM

Planet Emacsen

Mickey Petersen: Four year anniversary and new website

Welcome to the new Mastering Emacs website. After four years (yes, four!) it’s time for a site refresh. The old site was never meant to last that long; it was a temporary theme hastily picked so I could start writing about Emacs. Back then there were fewer blogs and Emacs resources. We didn’t even have a package manager.

The old site did serve its purpose as a launchpad for my blogging adventures. But Apache/WordPress is slow out of the box, even with SuperCache. A slight breeze and the thing would fall over — and it did, every time it featured on HackerNews or high-traffic subreddits.

Eventually I moved to FastCGI and nginx to host WordPress, but as it’s not officially supported it was a major pain to get working. António P. P. Almeida’s wordpress-nginx made my life so much easier and the site so much faster.

Alas, it’s time to retire the old site. Over the years I came to a number of conclusions:

People don’t use tags I spent a good amount of time adding tags to every article I wrote, but almost no one ever really used them. Sure people did click on them, but overall the reading guide proved far more useful. My goal is to re-implement a “tag”-like system but around concepts (Shells, Dired, etc.) instead of tags.

Not enough categories I had categories like “For Beginners”, “Tutorials”, and so on. They worked OK, but I am of the opinion now that manually curating my content makes more sense. Automatic content generation’s fine but throwing articles into a two or three baskets is never good enough.

Spammers are getting smarter I had to ditch Akismet, a free anti-spam checker for WordPress, after several years of near-perfect operation. The spammers simply mimicked humans too much and the filter would trip up on real content. I eventually switched to manual approval but that’s a lot of work.

Encourage visitors to read other articles A lot of visitors would leave after reading a single article, even though I would often have several related articles. I tried some of the “Suggested Content” plugins but they were universally terrible — another mark again content automation.

Apache is a memory hog Yes, yes. I am sure you can tame Apache and make it into a lithe and agile webserver but my best efforts failed me. The second I switched to nginx the memory and CPU usage dropped like a rock. Not to mention that nginx is much easier to configure.

So what about the new site then? Well it’s custom written for the job, though I may one day open source the blog engine. I launched it Tuesday the 14th of October, and immediately my site got slammed by reddit, Twitter and Hackernews on the announcement of Emacs 24.4. Talk about baptism by fire! The site held up just fine though.

The stack is Python and Flask running PostgreSQL with nginx as a reverse proxy and uWSGI as the application server, and with memcached for page caching. It took about three weeks of casual coding to write it, including the harrowing experience of having to convert the old blog articles — but more on that in a bit.

I opted for Memcached over Redis as my needs were simple, and because nginx ships with memcached support meaning nginx could short-circuit the trip to my upstream application server should the need ever arise. For now it just goes to uWSGI which checks the cache and returns the cached copy. That’s actually more than quick enough to survive HackerNews, the most high-traffic site visits I’ve gotten.

The slowness comes from page generation and not querying the database (databases are fast, Python is not) so that’s where memcached comes in. I thought about using nginx’s own proxy cache mechanism but invalidating the cache when you add a new comment or when I edit a page is messy.

Converting the blog articles proved a greater challenge than you might think. First of all, I like reStructuredText so I wanted to write and edit my articles in rST and convert them automatically to HTML when I publish them.

Enter Pandoc, which is a fine tool for the job. But there’s a snag. The original WordPress format is pseudo-HTML, meaning blank lines signify new paragraphs. Converting that without spending too much time with a hand-rolled, one-off state machine to convert to “real HTML” (for Pandoc to convert to rST) involved some compromises and hand editing. (And no, wrapping text blocks in paragraph tags is not enough when you have <pre> tags with newlines and other tag flotsam.)

So that was painful.

Coming up with a new design proved a fun challenge as well. CSS has come a long way in four years and things like text-justified automatic hyphenation work great (unless you’re on Chrome, in which case it’s the dark ages for you) on both Firefox and IE. Drop caps, ligatures, kerning and old-style numerals also work well and is possible in CSS alone. I’m surprised how good HTML/CSS is at typesetting nowadays. The font is Cardo, an open source font inspired by Monotype’s Bembo, a font itself inspired by Aldus Manutius’ from the 1500s, which I originally wanted to use but it’s way, way, WAY too expensive for web font use. If you’re a Chrome user on Windows the font will look weird as Chrome does not see fit to grace your eyes with aliasing. Again, both Firefox and IE render properly.

I opted for larger font sizes than normal in the belief that it's not the 1990s any more, and that big font sizes mean people won't have to zoom in or squint. Or at least that's what I always end up doing, and my vision's perfectly fine. Apparently doing that was a mistake: the amount of vitriol I received from certain quarters of the internet for having large font sizes was… perplexing to say the least.

So I made the fonts smaller.

The site’s still undergoing changes and I plan on adding to it over time. I am particularly keen on getting people to explore my site and learn more about Emacs.

Here’s to another four years.

Mickey.

by Mickey Petersen at October 20, 2014 10:41 AM

/r/emacs

Commenting in Emacs

I am very new to Emacs (about a week) and want to comment code. There is a "comment-dwim" function, but most of the time it doesn't do what I mean. I would like to rewrite it, but I need a bit of help. Here is what I want my function to do:

  • If a region is selected: if everything in it is commented, uncomment everything in the region; otherwise, comment every uncommented line in the region (so each line is commented exactly once)
  • else, if the line is empty: insert a comment
  • else, if the cursor is at the end of the line: add a comment at the end of the line
  • else: comment or uncomment this line

Could you give me a hand? My attempt so far is here: http://pastebin.com/cDNXaSAg

submitted by Kaligule

October 20, 2014 10:40 AM

TheoryOverflow

Ackermann Function Time Complexity

Are there any known problems that have an Ackermann function time complexity lower bound?

by Tony Johnson at October 20, 2014 10:40 AM

QuantOverflow

What are the main flaws behind Ross Recovery Theorem?

Stephen Ross's new paper claims that it is possible to separate risk aversion from historical probabilities, via the Perron-Frobenius theorem, if the stochastic discount factor is transition independent. Carr and Yu have extended the model to a preference-free setting with bounded stochastic processes. But a recent paper by Hansen, Borovicka and Scheinkman seems to show that Ross's approach is misspecified.

Can you explain why Ross’ recovery is misspecified?

by franic at October 20, 2014 10:13 AM

StackOverflow

How to create package object which will be available to all the inner packages in scala?

I have a package structure like this.

In file A/B/package.scala

package A
package object B {
  def foo = "Hello world"
}

In file A/B/xyz.scala

package A.B
object bar {
  def baz() {
    println(foo)
  }
} 

This won't throw an error; it works as expected. But if I try to use it like this:

In file A/B/C/biz.scala

package A.B.C
object biz {
  def baz() {
    println(foo)
  }
}

It throws an error, as foo is not in scope in the inner package. I need global access to foo. How can I achieve that?

One way is to import A.B like import A.B._.

But that imports all the classes in the A.B package, which I don't want. Is there any other way to achieve the same?
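One possibility, sketched below: a selective import brings in just the one member of the package object, so nothing else from A.B leaks into scope:

package A.B.C

import A.B.foo   // imports only foo, not every class in package A.B

object biz {
  def baz() {
    println(foo)
  }
}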

by Jeeva at October 20, 2014 10:02 AM

Bad Symbolic reference to reactivemongo.api.collections.GenericHandlers encountered in class file 'JSONGenericHandlers.class'

I have my APIs in Play 2.3 with ReactiveMongo. Recently I cleaned the project, and during the process some dependencies got updated. Now, when I try to run or compile, I get the errors below. Apart from the clean, I didn't do anything. Kindly help me.

[info] Compiling 48 Scala sources and 1 Java source to /home/Ruthvick/zcapi/zceapi/target/scala-2.11/classes...
[error] bad symbolic reference to reactivemongo.api.collections.GenericHandlers encountered in class file 'JSONGenericHandlers.class'.
 [error] Cannot access type GenericHandlers in package reactivemongo.api.collections. The current classpath may be
[error] missing a definition for reactivemongo.api.collections.GenericHandlers, or JSONGenericHandlers.class may have been compiled against a version that's
[error] incompatible with the one found on the current classpath.
[error] /home/Ruthvick/zcapi/zceapi/app/controllers/Application.scala:28: type arguments [play.modules.reactivemongo.json.collection.JSONCollection] do not conform to method collection's type parameter bounds [C <: reactivemongo.api.Collection]
[error]     def collection: JSONCollection = db.collection[JSONCollection]("shoppage")
[error]                                                   ^
[error] /home/Ruthvick/zcapi/zceapi/app/controllers/Application.scala:47: could not find implicit value for parameter writer: GenericCollection.this.pack.Writer[play.api.libs.json.JsObject]
[error]             collection.insert(result).map { lastError =>
[error]                              ^

[error] 60 errors found
[error] (compile:compile) Compilation failed
[error] application - 

Thanks,

by user3777846 at October 20, 2014 09:39 AM

Planet Clojure

Pre-Conj Interview: Ashton Kemerling

Ashton Kemerling interview about generative testing.

by LispCast at October 20, 2014 09:36 AM

StackOverflow

How to convert list of list to simple list by removing duplicate values using scala?

I have the following list -

List(List(
List(((groupName,group1),(tagMember,["192.168.20.30","192.168.20.20","192.168.20.21"]))), 
List(((groupName,group1),(tagMember,["192.168.20.30"]))),
List(((groupName,group1),(tagMember,["192.168.20.30","192.168.20.20"])))))

I want to convert it to -

List((groupName, group1),(tagMember,["192.168.20.30","192.168.20.20","192.168.20.21"]))

I tried to use .flatten but was unable to produce the desired output.

How do I get the above-mentioned output using Scala?
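On a simplified structure, a sketch of the general idea: each flatten call collapses one level of nesting, and distinct removes the duplicates afterwards (how the tuples inside your lists should merge is a separate question):

val nested = List(List(List(1, 2, 3), List(3), List(3, 2)))
val flat   = nested.flatten.flatten.distinct   // List(1, 2, 3)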

by Vishwas at October 20, 2014 09:29 AM

CompsciOverflow

Does a graph always have a minimum spanning tree that is binary?

I have a graph and I need to find a minimum spanning tree of it. What can be done so that the output obtained is a binary tree?

by Aditya.M at October 20, 2014 09:28 AM

StackOverflow

Canonical way to define and execute a method that outputs Unit?

I have a list of methods (functions) that output Unit:

var fns:List[() => Unit] = Nil
def add(fn:() => Unit) = fns :+= fn      // a method to add to the list

I want to add println("hello") to the list.

add(() => println("hello"))  

Is there a better way than using the ugly parentheses?

I would have preferred:

add (println("hello"))  // error here 

def myCoolMethod = {
   // do something cool
   // may return something, not necessarily Unit
}
add (myCoolMethod) // error here

I tried var fns:List[_ => Unit], var fns:List[Any => Unit], fns:List[() => Any], etc., without getting what I want.

Second question: how do I execute the methods in the list when I want to? I got it to work with:

fns foreach (_.apply) 

Is there a better way?
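For what it's worth, a sketch using a by-name parameter, which removes the parentheses at the call site: the argument expression is not evaluated when add is called, only when the stored function runs:

var fns: List[() => Unit] = Nil

def add(fn: => Unit): Unit = fns :+= (() => fn)   // wrap the by-name block

add(println("hello"))              // nothing printed yet
add { val x = 1 + 1; println(x) }  // blocks work too

fns.foreach(_.apply())             // prints "hello" then "2"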

by Jus12 at October 20, 2014 09:10 AM

/r/compsci

Communication in Teams - Journals/Books

Are there any journals or books that touch on the topic of communication in teams? The pros of good communication and the cons of negative communication? Preferably with a slant towards compsci/software engineering?

submitted by XiiMoss

October 20, 2014 08:59 AM

StackOverflow

How to swap 2 elements in Vector of Vector

How do I extend this: What is the idiomatic way to swap two elements in a vector

to essentially a 2D array?

[[1 2 3] [4 5 6] [7 8 9]] --> [[1 2 5] [4 3 6] [7 8 9]]
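The same idea sketched with Scala's immutable Vector, since the semantics carry over: updated returns a copy, so nested updated calls swap across rows without mutation (the positions below match the example):

val m = Vector(Vector(1, 2, 3), Vector(4, 5, 6), Vector(7, 8, 9))

// swap element (0,2) with element (1,1); both reads go to the original m
val swapped = m
  .updated(0, m(0).updated(2, m(1)(1)))
  .updated(1, m(1).updated(1, m(0)(2)))
// Vector(Vector(1, 2, 5), Vector(4, 3, 6), Vector(7, 8, 9))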

by user1639926 at October 20, 2014 08:44 AM

What are the core concepts in functional programming?

In object-oriented programming, we might say the core concepts are:

  1. encapsulation
  2. inheritance
  3. polymorphism

What would that be in functional programming?
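For orientation, a sketch of a few concepts that usually appear on such a list (not an authoritative answer): pure functions, immutable data, and first-class/higher-order functions.

val xs = List(1, 2, 3)            // immutable data
def double(n: Int): Int = n * 2   // pure: output depends only on the input
val ys = xs.map(double)           // higher-order: map takes a function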

by pierr at October 20, 2014 08:43 AM

How to populate placeholders in XML template with Scala?

I have the following XML template, and using Scala XML I'd like to have those placeholders properly populated, but I can't find the API for this. Can anyone advise whether this is possible at all?

<?xml version="1.0" encoding="UTF-8" ?>
<testsuite failures={failures} time={time} errors={errors} skipped="0" tests={tests} name="k4unit">
    <properties />
</testsuite>

The Scala code so far:

scala> import scala.xml.XML
import scala.xml.XML

scala> val xml = XML.loadFile("./testcases/TEST-k4unit-template.xml")
xml: scala.xml.Elem = 
<testsuite name="k4unit" tests={tests} skipped="0" errors={errors} time={time} failures={failures}>
  <properties/>
</testsuite>

I'd like to have the attributes tests, errors, time and failures populated with dynamic values.
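A sketch of one possibility, assuming the file parses (the placeholder values would need to be quoted to be well-formed XML): scala.xml has no template engine, but the % operator with Attribute rewrites attributes on an Elem. The values below are made up:

import scala.xml._

val template = XML.loadFile("./testcases/TEST-k4unit-template.xml")

// % replaces attributes of the same name; Attribute chains end in Null
val filled = template %
  Attribute(None, "failures", Text("0"),
  Attribute(None, "time",     Text("1.23"),
  Attribute(None, "errors",   Text("0"),
  Attribute(None, "tests",    Text("42"), Null))))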

by Giovanni Azua at October 20, 2014 08:41 AM

TheoryOverflow

three address code for matrix multiplication

Can somebody please give me the three-address code for the following matrix multiplication?

for (i = 1 to n) do
  for (j = 1 to n) do
    c[i,j] = 0;
for (i = 1 to n) do
  for (j = 1 to n) do
    for (k = 1 to n) do
      c[i,j] = c[i,j] + a[i,k] * b[k,j]

by ishan at October 20, 2014 08:36 AM

StackOverflow

How to write a Play JSON writes converter for a case class with a single nullable member

In Play 2.3, I have a case class with a single optional double member:

case class SomeClass(foo: Option[Double])

I need a JSON write converter that handles the member as nullable:

implicit val someClassWrite: Writes[SomeClass] = ???

The Play docs provide an example:

case class DisplayName(name:String)
implicit val displayNameWrite: Writes[DisplayName] = Writes {
  (displayName: DisplayName) => JsString(displayName.name)
}

But sadly I can't figure out how to do this for 1) a single nullable and 2) a double. Any ideas? Thanks.

Update #1: The only solution I can come up with is this:

implicit val someClassWrite: Writes[SomeClass] = Writes {
  (someClass: SomeClass) => someClass.foo match {
    case Some(f) => JsNumber(BigDecimal(f))
    case _ => JsNull
  }
}

Update #2: Ignore my solution. Travis Brown's is the one.

by Lasf at October 20, 2014 08:29 AM