Planet Primates

July 02, 2015

hubertf's NetBSD blog

NetBSD on NVIDIA Jetson TK1 (Tegra K1)

I'm losing count of all those spiffy ARM systems that NetBSD runs on, but here's a fancy one to take note of. Quoting shamelessly from Jared McNeill's email, ``Jetson TK1 is a Tegra K1 (quad core Cortex-A15) development board from NVIDIA. Possibly the fastest ARM board that NetBSD supports now.

Still working on HDMI, and need to track down an issue that prevents eMMC from working (but the SD card slot works fine). The board is also capable of USB3 but for now we have routed the USB ports to the USB2 controller.''

Besides being a nice ARM platform, it should not go unnoticed that in addition to the quad-core ARM Cortex-A15 CPU, the system also has an NVIDIA Kepler GPU with a whopping 192 CUDA cores. Any takers to get those going with NetBSD?

More information is available on the NVIDIA Jetson TK1 page and NVIDIA's embedded computing developers' site. Also, check out the information on how to buy a Jetson TK1 DevKit. Now who's first to upload their dmesg? :-)

July 02, 2015 11:33 PM

BSD dmesg collection service

I'm always into serious dmesg pr0n, and apparently there's a hub collecting those that I've missed so far, run by the fine folks from NYCBUG: dmesgd.nycbug.org.

By default it lists the latest submissions of a number of BSD variants, but it's also possible to just get everyone's favourite easily.

To submit a dmesg of NetBSD running on your favourite hardware (or software / virtualization, of course), there is a webpage to add your own dmesg. Also, statistics are available. Check it out!

July 02, 2015 11:10 PM

StackOverflow

scala : no common ancestor available for asInstanceOf[]

I created some case classes used as messages in akka. When the program receives some messages, it calls the method asInstanceOf[], but I don't know what to put inside the brackets, as the message can be one of the 3 different case classes.

Here is a try:

abstract class Retour
case class Naissance(val sender : ActorRef) extends Retour
case class NaissanceEtMort(val sender : ActorRef) extends Retour
case class PlusUnMoisOK(val sender : ActorRef) extends Retour

but at the execution of my program I get this error message:

lapins.Clapier$PlusUnMoisOK$ cannot be cast to lapins.Clapier$Retour   

can you help me?

EDIT:

The line causing the error is the one with [Retour] in:

val future = ask(couple,PlusUnMois)
val result = Await.result(future,timeout.duration).asInstanceOf[Retour]

and the error is below:

[ERROR] [07/02/2015 01:33:48.436] [clapier-akka.actor.default-dispatcher-5] [akka://clapier/user/$a] lapins.Clapier$PlusUnMoisOK$ cannot be cast to lapins.Clapier$Retour
akka.actor.ActorInitializationException: exception during creation

EDIT2:

Resolved! The message sending was incorrectly written: indeed, I wrote "sender ! PlusUnMoisOK" instead of "sender ! PlusUnMoisOK(self)".
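For reference, here is a minimal sketch (not the poster's actual program) of how the reply could be handled by pattern matching on the Retour hierarchy instead of casting with asInstanceOf; it assumes the corrected "sender ! PlusUnMoisOK(self)" send and reuses the couple, PlusUnMois and timeout values from the question:

import scala.concurrent.Await
import akka.pattern.ask

val future = ask(couple, PlusUnMois)            // assumes an implicit akka.util.Timeout is in scope
Await.result(future, timeout.duration) match {
  case PlusUnMoisOK(s)    => // handle this reply; s is the sending ActorRef
  case Naissance(s)       => // ...
  case NaissanceEtMort(s) => // ...
  case other              => sys.error(s"unexpected reply: $other")
}

Making Retour a sealed trait would also let the compiler warn about non-exhaustive matches.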

by lolveley at July 02, 2015 10:59 PM

hubertf's NetBSD blog

New binary releases for NetBSD on Raspberry Pi, including 7.0 RC1

NetBSD runs on many machines, and the Raspberry Pi is one of them. Getting the stock distribution is not that easy, and to help in getting things going, Jun Ebihara has been providing ready-made images for quite some time.

There are images available that are based on the latest development snapshot, NetBSD-current, and with the NetBSD 7.0 release around the corner, there is also an image based on NetBSD 7.0 Release Candidate 1.

See the NetBSD wiki for many more details, and if you use your RPI for any cool hacks, be sure to let us know!

July 02, 2015 10:54 PM

DragonFly BSD Digest

BSDNow 096: Lost Technology

BSDNow 096 has the usual new links, even more BSDCan 2015 video links, and an interview with Jun Ebihara about some of NetBSD’s lesser-known architectures.

(I like trying to guess the interview subject from each week’s obscure title; I was going to guess RetroBSD…  which would make a good topic to explore.)

by Justin Sherrill at July 02, 2015 10:51 PM

StackOverflow

Elegant implementation of n-dimensional matrix multiplication using lists?

List functions allow us to implement arbitrarily-dimensional vector math quite elegantly. For example:

on   = (.) . (.)
add  = zipWith (+)
sub  = zipWith (-)
mul  = zipWith (*)
dist = len `on` sub
dot  = sum `on` mul
len  = sqrt . join dot

And so on.

main = print $ add [1,2,3] [1,1,1] -- [2,3,4]
main = print $ len [1,1,1]         -- 1.7320508075688772
main = print $ dot [2,0,0] [2,0,0] -- 4

Of course, this is not the most efficient solution, but it is insightful to look at, as one could say that map, zipWith and the like generalize those vector operations. There is one function I couldn't implement elegantly, though - the cross product. Since a possible n-dimensional generalization of the cross product is the n-dimensional matrix multiplication, how can I implement matrix multiplication elegantly?

by Viclib at July 02, 2015 10:46 PM

How do you mixin functionality to each step of an iterative procedure with Scala?

I am working on an optimization procedure in Scala and am looking for advice on how to structure my problem. The procedure takes one step at a time, so I've naively modelled the problem using a Step class:

class Step(val state:State) {
  def doSomething = ...
  def doSomethingElse = ...
  def next:Step = ... // produces the next step in the procedure
}

Each step in the procedure is represented by the immutable Step class whose constructor is given the state produced by the previous step and whose next method produces a subsequent Step instance. The basic idea is then to wrap this with Iterator[Step] so steps can be taken until the optimization converges. Although a bit simplistic, this works well for the vanilla case.

Now, however, I need to add various extensions to the algorithm, and I need to arbitrarily mixin these extensions depending on the problem being optimized. Normally this would be accomplished with the stackable trait pattern but this approach poses a couple of issues for this problem. Here is an example of a would-be extension:

trait SpecialStep extends Step {
  // Extension-specific state passed from step to step
  val specialState:SpecialState = ...

  // Wrap base methods to extend functionality
  abstract override def doSomething = { ...; super.doSomething(); ... }
}

The main issue is that the next method of the base class doesn't know which extensions have been mixed in, so subsequently produced steps will not incorporate any extensions mixed into the initial one.

Also, each extension may need to pass its own state from step to step. In this example a SpecialState instance is included in the mixin, but without overriding the next method, SpecialStep has no way to pass along its SpecialState, and overriding next cannot be done since there may be multiple extensions mixed in, each only aware of itself.

So it seems I've painted myself into a corner here and am hoping someone has some insight for how to approach this with Scala. Which design pattern is most appropriate for this type of problem?
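One possible direction, shown as a rough sketch below with hypothetical names (State, SpecialisedStep and make are illustrations, not the original code): instead of having next construct a bare Step, delegate construction to an abstract factory that the final, fully-mixed-in class provides, so every produced step reproduces its own mixin stack.

case class State(value: Double)

abstract class Step(val state: State) {
  // the concrete composition decides how to build the following step
  protected def make(nextState: State): Step

  def doSomething(): Unit = println(s"base step at ${state.value}")
  def next: Step = make(State(state.value + 1.0))
}

trait SpecialStep extends Step {
  // wrap base behaviour, stackable-trait style
  abstract override def doSomething(): Unit = {
    println("special pre-processing")
    super.doSomething()
  }
}

// The leaf class fixes the mixin stack once, and next preserves it
class SpecialisedStep(state: State) extends Step(state) with SpecialStep {
  protected def make(nextState: State): Step = new SpecialisedStep(nextState)
}

object Demo extends App {
  val s0: Step = new SpecialisedStep(State(0.0))
  s0.next.next.doSomething() // still runs the SpecialStep wrapper two steps later
}

This does not by itself solve carrying SpecialState across steps, but the same factory could be widened to copy extension-specific state as well.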

by weston at July 02, 2015 10:39 PM

QuantOverflow

How to calculate implied volatility smile of basket using correlations?

For a basket, the realized volatility can be calculated using:

$$\sqrt{\sigma_1^2 + \sigma_2^2 + 2 \sigma_1 \sigma_2 \rho}$$

Suppose I have the volatility surfaces of two underlyings S1, S2 (in strike space), and for each point I calculate the basket vol using the above formula. How accurate is the approximation? I can extend this to multiple assets using a simple Cholesky transformation.

Correlation used is historical correlation, and not implied correlation.
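For reference, the standard textbook generalization of the two-asset formula above to an $n$-asset basket with weights $w_i$ (not specific to this question) is

$$\sigma_B = \sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n} w_i\, w_j\, \sigma_i\, \sigma_j\, \rho_{ij}}$$

which reduces to the expression above for $n=2$, $w_1=w_2=1$ and $\rho_{12}=\rho$.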

by user139258 at July 02, 2015 10:35 PM

StackOverflow

Difference between returning Future.failed(Exception) and throwing an Exception

In Scala, what's the difference between returning Future.failed(new Exception("message!")) and throw new Exception("message!")?

Let's say this is happening in a function that is to return Future[Unit], and the calling function is like so:

someFunction onFailure {
  case ex: Exception => log("Some exception was thrown")
}

Is there a preference of one over the other or a specific use case for each?
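A small sketch illustrating the difference (the function names are hypothetical): inside a Future body a thrown exception is caught and becomes a failed Future, whereas an exception thrown before any Future is created propagates synchronously to the caller and never reaches onFailure.

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def fails1(): Future[Unit] = Future.failed(new Exception("message!"))

def fails2(): Future[Unit] = Future { throw new Exception("message!") } // still a failed Future

def fails3(): Future[Unit] = throw new Exception("message!") // throws before any Future exists

fails1() onFailure { case ex => println("handled: " + ex.getMessage) } // handled
fails2() onFailure { case ex => println("handled: " + ex.getMessage) } // handled
// fails3() would throw right here, so the onFailure callback is never even registered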

by jnfr at July 02, 2015 10:22 PM

StackOverflow

Handling extra newlines in csv files parsed with Scala?

I'm totally new to Scala, and am trying to parse a CSV file that has carriage returns/newlines and other special characters like commas in some of the cells (i.e. within double quotations), for example:

"A","B","C\n,FF\n","D"\n
"Q","W","E","R\n\n"\n
"1","2\n","2","2,2\n"\n

I want to load this into a list of lists type in Scala, like the following:

List(List("A","B","C,FF","D"),List("Q","W","E","R"),List("1","2","2","2,2"))

Any suggestions how it can be done?

I have found some solutions for the same problem in other languages. For example this is a great one in Python, which I understand well: Handling extra newlines (carriage returns) in csv files parsed with Python?

My try:

val src2 = Source.fromFile("sourceFileName.csv")
val it =src2.getLines()
val data = for (i<-it) yield i.replace("\"","").split(",")

But it looks like all carriage returns are seen as new lines.
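A rough sketch of one possible approach (not a full CSV parser: escaped "" quotes, \r handling and stripping newlines left inside cells are omitted): read the whole file and split into records and fields only at delimiters that are outside double quotes, so quoted newlines and commas stay inside their cell.

import scala.io.Source

val text = Source.fromFile("sourceFileName.csv").mkString

def splitOutsideQuotes(s: String, sep: Char, keepQuotes: Boolean): List[String] = {
  val out = scala.collection.mutable.ListBuffer(new StringBuilder)
  var inQuotes = false
  for (c <- s) {
    if (c == '"') { inQuotes = !inQuotes; if (keepQuotes) out.last.append(c) }
    else if (c == sep && !inQuotes) out += new StringBuilder   // delimiter outside quotes starts a new piece
    else out.last.append(c)
  }
  out.map(_.toString).toList
}

val rows: List[List[String]] =
  splitOutsideQuotes(text, '\n', keepQuotes = true)              // break into records
    .filter(_.nonEmpty)
    .map(line => splitOutsideQuotes(line, ',', keepQuotes = false)) // break into cells, dropping the quotes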

by Alt at July 02, 2015 10:20 PM

Ansible template only if file doesn't exist

I am trying to set up templates only if the file doesn't exist. I currently have the following, which always creates the files.

- name: Setup templates
  template: src={{ item }} dest={{ item | basename | regex_replace('\.j2','') }}
  with_fileglob: ../templates/*.j2

I've seen methods to register a variable with the stat module, but haven't figured out how to do that with with_fileglob.

by Joshua at July 02, 2015 10:20 PM

/r/emacs

slime-eval and namespace problems (X-post /r/Common_Lisp)

Using "slime-eval-region" on the following works:

(+ 1 2)

However evaluating the following with C-x C-e in emacs lisp mode

(insert (number-to-string (slime-eval '(+ 1 2))))

yields the SLIME error:

The function SWANK-IO-PACKAGE::+ is undefined.

Adding the "cl:+" namespace token:

(insert (number-to-string (slime-eval '(cl:+ 1 2))))

works, but how can I accomplish the former without having to qualify the namespace or some hacky macro workaround? What is the best way to bind common lisp evaluations in emacs lisp?

submitted by nikonikolai

July 02, 2015 10:11 PM

StackOverflow

control ansible task file execution

My current Ansible project is setup like so:

backup-gitlab.yml
roles/
  aws_backups/
    tasks/
      main.yml
      backup-vm.yml
  gitlab/
    tasks/
      main.yml
      start.yml
      stop.yml

backup-gitlab.yml needs to do the following:

  1. Invoke stop.yml on the gitlab host.
  2. Invoke backup-gitlab.yml on a different host.
  3. Invoke start.yml on the gitlab host.

The problem I'm running into is Ansible doesn't seem to support a way of choosing which task files to run within the same role in the same playbook. Before I was using tags to control what Ansible would do, but in this case tagging the include statements for start.yml and stop.yml doesn't work because Ansible doesn't appear to have a way to dynamically change the applied tags that are run once they are set through the command line.

I can't come up with an elegant way to achieve this.

Some options are:

  1. Have each task file be contained within its own role. This is annoying because I will end up with a million roles that are not grouped in any way. It's essentially abandoning the whole 'role' concept.
  2. Use include with hard coded paths. This is prone to error as things move around. Also, since Ansible deprecated combining with_items with include (or using any sort of dynamic looping with include), I can no longer quickly change up the task files being run. Any minor change in my workflow requires lots of coding changes. I would really like to stick with using tags from the command line to control exactly what Ansible does.
  3. Use shell scripts to invoke separate Ansible playbooks.
  4. Use conditionals (when clause) on every single Ansible action, and control what gets run by setting variables. While several people have recommended this on SO, it sounds awful. I will have to add the conditional to hundreds of actions and every time I run a playbook the output will be cluttered by hundreds of 'skip' statements.
  5. Leverage Jinja templates and ansible's local_connection to dynamically build static main.yml files with all the required task files included in the proper order (using computed relative paths). Then invoke that computed main.yml file. This is dangerous and convoluted.
  6. Use top level Ansible plays to invoke lower level plays. Seems messy, also this brings in problems when I need to pass variables between plays. Using Ansible's Python Api may help this.

Ansible strives to bring VMs into idempotent states but this isn't very helpful and is a dated way of thinking in my opinion (I would have stuck with Chef if that is all I wanted). I want to leverage Ansible to actually do things such as: actively change configuration states, kick off processes, monitor events, react to events, etc. Essentially I want it to automate as much of my job as possible. The current 'role' structure (with static configurations) that Ansible recommends doesn't fit this paradigm very well even though their usage of remote command execution via SSH gets us so close to the dream.

by alex at July 02, 2015 10:07 PM

Extract all root-to-leaf paths in a general tree in Scala

My tree object:

    private[package] object Tree {

      /** A node in a  Tree. */
      class Node[T](val parent: Node[T]) extends Serializable {
        var item: T = _
        var count: Long = 0L
        val children: mutable.Map[T, Node[T]] = mutable.Map.empty

        def isRoot: Boolean = parent == null
      }

      /** Attribute in a tree */
      private class Attribute [T] extends Serializable {
        var count: Long = 0L
        val nodes: ListBuffer[Node[T]] = ListBuffer.empty
      }
    }

And the class:

    private[package] class Tree[T] extends Serializable {
      import Tree._
      val root: Node[T] = new Node(null)
      private val attributes: mutable.Map[T, Attribute[T]] = mutable.Map.empty

    def extract(
       minCount: Long,
       validateSuffix: T => Boolean = _ => true): Iterator[(List[T], Long)] = {
          //missing code
      }

Function extract must produce an Iterator[(List[T], Long)] of root-to-leaf paths together with their counts. A path is valid if the count of each node is more than minCount.

EDIT: this is my try:

      def extract(minCount: Long, validateSuffix: T => Boolean = _ => true): Iterator[(List[T], Long)] = {

        def traverse (node: Node[T], path: List[T]): Iterator[(List[T], Long)] = {
          path.::(node.item)
          node.children.foreach{ case (child:Node[T]) =>
            traverse(child, path)
          } ++ {
            if (node.children.isEmpty && node.count>=minCount){
              Iterator.single((path, node.count))
            } else {
              Iterator.empty
            }
          }
        }
        traverse (root, List.empty)
      }
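One likely issue in the attempt above is that path.::(node.item) builds a new list that is immediately discarded (lists are immutable), and foreach returns Unit, so nothing is collected from the children. A sketch of a corrected traversal based on the code in the question (the root handling and the place where validateSuffix is applied are guesses and may need adjusting):

def extract(minCount: Long,
            validateSuffix: T => Boolean = _ => true): Iterator[(List[T], Long)] = {

  def traverse(node: Node[T], path: List[T]): Iterator[(List[T], Long)] = {
    val pathHere = if (node.isRoot) path else node.item :: path   // prepend while descending
    val fromChildren = node.children.valuesIterator.flatMap(child => traverse(child, pathHere))
    val fromLeaf =
      if (node.children.isEmpty && node.count >= minCount && validateSuffix(node.item))
        Iterator.single((pathHere.reverse, node.count))           // reverse to get root-to-leaf order
      else Iterator.empty
    fromChildren ++ fromLeaf
  }

  traverse(root, List.empty)
}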

by user5069994 at July 02, 2015 10:01 PM

Apache Spark GraphX: operations performance?

I want to ask several questions about GraphX and Scala performance:

  1. How fast are transformations? I know that these operations are lazy and won't be computed immediately, but I just wonder how they actually work under the hood.
  2. For example's sake: let's say that I have a basic var testGraph: Graph object, which is the representation of a complete graph, computed and cached in cluster memory.
    • What will happen if I do this: testGraph = testGraph.someTransformation.someAction?
    • I think that I should remove testGraph from the cache, am I right?
    • How fast will it be?
    • How much memory will it take?
    • Is there any efficient way to add a new vertex to this testGraph (which will be bound to several randomly chosen vertices)? And what if testGraph is large enough (more than a million vertices)?

by SuppieRK at July 02, 2015 09:58 PM

Clojure - memoize on disk

I would like to improve the performance of a function that returns resized images. The requested size of the images should not vary a lot (it is device-dependent), so it would make sense to somehow cache the results.

I could of course store it on disk, and check if the resized image exists, and make sure that if the original image is deleted, the resized versions are too...

Or, I could use a memoized function. But since the results are potentially quite big (an image is about 5 - 10 MB I think), it doesn't make sense to store those in memory (several tens of GB of images and their modified versions would fill up the memory quite quickly).

So, is there a way to have a memoized function that acts like the regular Clojure defmemo, but is backed by a folder on a local disk instead of memory? I could then use a TTL strategy to make sure that images don't stay out of sync for too long.

Something similar to crache, but backed by a filesystem?

by nha at July 02, 2015 09:57 PM

How to get a Subscriber and Publisher from a broadcasted Akka stream?

I'm having problems getting Publishers and Subscribers out of my flows when using more complicated graphs. My goal is to provide an API of Publishers and Subscribers and run the Akka streaming internally. Here's my first try, which works just fine.

val subscriberSource = Source.subscriber[Boolean]
val someFunctionSink = Sink.foreach(Console.println)

val flow = subscriberSource.to(someFunctionSink)

//create Reactive Streams Subscriber
val subscriber: Subscriber[Boolean] = flow.run()

//prints true
Source.single(true).to(Sink(subscriber)).run()

But then with a more complicated broadcast graph, I'm unsure how to get the Subscriber and Publisher objects out. Do I need a partial graph?

val subscriberSource = Source.subscriber[Boolean]
val someFunctionSink = Sink.foreach(Console.println)
val publisherSink = Sink.publisher[Boolean]

FlowGraph.closed() { implicit builder =>
  import FlowGraph.Implicits._

  val broadcast = builder.add(Broadcast[Boolean](2))

  subscriberSource ~> broadcast.in
  broadcast.out(0) ~> someFunctionSink
  broadcast.out(1) ~> publisherSink
}.run()

val subscriber: Subscriber[Boolean] = ???
val publisher: Publisher[Boolean] = ???
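A sketch of one way the materialized values might be obtained (based on the graph in the question; the exact shape accessors may differ slightly between akka-stream versions): pass the source and sink whose materializations are needed to FlowGraph.closed and combine them into a tuple.

val graph = FlowGraph.closed(subscriberSource, publisherSink)((_, _)) { implicit builder =>
  (source, sink) =>
    import FlowGraph.Implicits._
    val broadcast = builder.add(Broadcast[Boolean](2))
    source.outlet ~> broadcast.in
    broadcast.out(0) ~> someFunctionSink
    broadcast.out(1) ~> sink.inlet
}

// run() now materializes the combined tuple
val (subscriber, publisher) = graph.run()   // (Subscriber[Boolean], Publisher[Boolean])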

by ripla at July 02, 2015 09:51 PM

Dataframes to EdgeRDD (GraphX) using Scala api to Spark

Is there a nice way of going from a Spark DataFrame to an EdgeRDD without hardcoding types in the Scala code? The examples I've seen use case classes to define the type of the EdgeRDD.

Let's assume that our Spark DataFrame has StructFields ("dstID", LongType, false) and ("srcID", LongType, false) and between 0 and 22 additional StructFields (we are constraining this so that we can use a TupleN to represent them). Is there a clean way to define an EdgeRDD[TupleN] by grabbing the types from the DataFrame? As motivation, consider that we are loading a Parquet file that contains type information.

I'm very new to Spark and Scala, so I realize the question may be misguided. In this case, I'd appreciate learning the "correct" way of thinking about this problem.

by wzlwardance at July 02, 2015 09:45 PM

ansible permission denied, but ssh with key works

UPDATE: Thanks for the feedback guys. As expected, it turned out to be a bonehead mistake on my part. I was not specifying the correct username when I was running the ansible command. If you run ansible without any user (-u), then it will use your current system username. In my case that was swill when I wanted to run it as root.

This is now working:
ansible all -m ping -u root


I am pretty baffled at this point. I can ssh into the box using the public key without any problems, but when I try to use ansible to connect I get no love.

Note: I am using <ip_address> as the IP so as not to expose my actual IP address...

My local /etc/ansible/hosts has the following

<ip_address>

I can ssh into the remote box using my default ssh key:

$ ssh root@<ip_address>

I setup ssh on my remote box using:

$ cat ~/.ssh/id_rsa.pub | ssh <user>@<ip_address> "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"

The contents of my remote /etc/ssh/sshd_config file is:

# Package generated configuration file
# See the sshd_config(5) manpage for details

# What ports, IPs and protocols we listen for
Port 22
# Use these options to restrict which interfaces/protocols sshd will bind to
#ListenAddress ::
#ListenAddress 0.0.0.0
Protocol 2
# HostKeys for protocol version 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
#Privilege Separation is turned on for security
UsePrivilegeSeparation yes

# Lifetime and size of ephemeral version 1 server key
KeyRegenerationInterval 3600
ServerKeyBits 1024

# Logging
SyslogFacility AUTH
LogLevel INFO

# Authentication:
LoginGraceTime 120
PermitRootLogin yes
StrictModes yes

RSAAuthentication yes
PubkeyAuthentication yes
#AuthorizedKeysFile %h/.ssh/authorized_keys

# Don't read the user's ~/.rhosts and ~/.shosts files
IgnoreRhosts yes
# For this to work you will also need host keys in /etc/ssh_known_hosts
RhostsRSAAuthentication no
# similar for protocol version 2
HostbasedAuthentication no
# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
#IgnoreUserKnownHosts yes

# To enable empty passwords, change to yes (NOT RECOMMENDED)
PermitEmptyPasswords no

# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no

# Change to no to disable tunnelled clear text passwords
PasswordAuthentication no

# Kerberos options
#KerberosAuthentication no
#KerberosGetAFSToken no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes

# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes

X11Forwarding yes
X11DisplayOffset 10
PrintMotd no
PrintLastLog yes
TCPKeepAlive yes
#UseLogin no

#MaxStartups 10:30:60
#Banner /etc/issue.net

# Allow client to pass locale environment variables
AcceptEnv LANG LC_*

Subsystem sftp /usr/lib/openssh/sftp-server

# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication.  Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
UsePAM no

Here is what I am getting when I try to connect with ansible...

    swill:ansible swill$ ansible all -m ping -vvvv
<<ip_address>> ESTABLISH CONNECTION FOR USER: swill
<<ip_address>> REMOTE_MODULE ping
<<ip_address>> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/swill/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 <ip_address> /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1435804031.59-82833056078245 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1435804031.59-82833056078245 && echo $HOME/.ansible/tmp/ansible-tmp-1435804031.59-82833056078245'
<ip_address> | FAILED => SSH Error: Permission denied (publickey).
    while connecting to <ip_address>:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.

swill:ansible swill$ ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/swill/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 <ip_address> /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1435804031.59-82833056078245 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1435804031.59-82833056078245 && echo $HOME/.ansible/tmp/ansible-tmp-1435804031.59-82833056078245'
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
debug1: Reading configuration data /Users/swill/.ssh/config
debug1: /Users/swill/.ssh/config line 4: Applying options for <ip_address>
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: /etc/ssh_config line 53: Applying options for *
debug1: auto-mux: Trying existing master
debug1: Control socket "/Users/swill/.ansible/cp/ansible-ssh-<ip_address>-22-swill" does not exist
debug2: ssh_connect: needpriv 0
debug1: Connecting to <ip_address> [<ip_address>] port 22.
debug2: fd 3 setting O_NONBLOCK
debug1: fd 3 clearing O_NONBLOCK
debug1: Connection established.
debug3: timeout: 9958 ms remain after connect
debug3: Incorrect RSA1 identifier
debug3: Could not load "/Users/swill/.ssh/id_rsa" as a RSA1 public key
debug1: identity file /Users/swill/.ssh/id_rsa type 1
debug1: identity file /Users/swill/.ssh/id_rsa-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.2
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.6.1p1 Ubuntu-2ubuntu2
debug1: match: OpenSSH_6.6.1p1 Ubuntu-2ubuntu2 pat OpenSSH*
debug2: fd 3 setting O_NONBLOCK
debug3: load_hostkeys: loading entries for host "<ip_address>" from file "/Users/swill/.ssh/known_hosts"
debug3: load_hostkeys: found key type RSA in file /Users/swill/.ssh/known_hosts:134
debug3: load_hostkeys: loaded 1 keys
debug3: order_hostkeyalgs: prefer hostkeyalgs: ssh-rsa-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-rsa
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa-cert-v01@openssh.com,ssh-rsa-cert-v00@openssh.com,ssh-rsa,ssh-dss-cert-v01@openssh.com,ssh-dss-cert-v00@openssh.com,ssh-dss
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm@openssh.com,aes256-gcm@openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: hmac-md5-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-ripemd160-etm@openssh.com,hmac-sha1-96-etm@openssh.com,hmac-md5-96-etm@openssh.com,hmac-md5,hmac-sha1,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-ripemd160-etm@openssh.com,hmac-sha1-96-etm@openssh.com,hmac-md5-96-etm@openssh.com,hmac-md5,hmac-sha1,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: zlib@openssh.com,zlib,none
debug2: kex_parse_kexinit: zlib@openssh.com,zlib,none
debug2: kex_parse_kexinit: 
debug2: kex_parse_kexinit: 
debug2: kex_parse_kexinit: first_kex_follows 0 
debug2: kex_parse_kexinit: reserved 0 
debug2: kex_parse_kexinit: curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256,ssh-ed25519
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,rijndael-cbc@lysator.liu.se
debug2: kex_parse_kexinit: hmac-md5-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-ripemd160-etm@openssh.com,hmac-sha1-96-etm@openssh.com,hmac-md5-96-etm@openssh.com,hmac-md5,hmac-sha1,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: hmac-md5-etm@openssh.com,hmac-sha1-etm@openssh.com,umac-64-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-ripemd160-etm@openssh.com,hmac-sha1-96-etm@openssh.com,hmac-md5-96-etm@openssh.com,hmac-md5,hmac-sha1,umac-64@openssh.com,umac-128@openssh.com,hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,hmac-ripemd160@openssh.com,hmac-sha1-96,hmac-md5-96
debug2: kex_parse_kexinit: none,zlib@openssh.com
debug2: kex_parse_kexinit: none,zlib@openssh.com
debug2: kex_parse_kexinit: 
debug2: kex_parse_kexinit: 
debug2: kex_parse_kexinit: first_kex_follows 0 
debug2: kex_parse_kexinit: reserved 0 
debug2: mac_setup: found hmac-md5-etm@openssh.com
debug1: kex: server->client aes128-ctr hmac-md5-etm@openssh.com zlib@openssh.com
debug2: mac_setup: found hmac-md5-etm@openssh.com
debug1: kex: client->server aes128-ctr hmac-md5-etm@openssh.com zlib@openssh.com
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug2: dh_gen_key: priv key bits set: 129/256
debug2: bits set: 528/1024
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Server host key: RSA 5c:c8:5f:d3:24:fe:ca:e2:05:03:12:d6:cb:42:d2:91
debug3: load_hostkeys: loading entries for host "<ip_address>" from file "/Users/swill/.ssh/known_hosts"
debug3: load_hostkeys: found key type RSA in file /Users/swill/.ssh/known_hosts:134
debug3: load_hostkeys: loaded 1 keys
debug1: Host '<ip_address>' is known and matches the RSA host key.
debug1: Found key in /Users/swill/.ssh/known_hosts:134
debug2: bits set: 505/1024
debug1: ssh_rsa_verify: signature correct
debug2: kex_derive_keys
debug2: set_newkeys: mode 1
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug2: set_newkeys: mode 0
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug2: key: /Users/swill/.ssh/id_rsa (0x7f8482500680), explicit
debug1: Authentications that can continue: publickey
debug3: start over, passed a different list publickey
debug3: preferred gssapi-with-mic,gssapi-keyex,hostbased,publickey
debug3: authmethod_lookup publickey
debug3: remaining preferred: ,gssapi-keyex,hostbased,publickey
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /Users/swill/.ssh/id_rsa
debug3: send_pubkey_test
debug2: we sent a publickey packet, wait for reply
debug1: Authentications that can continue: publickey
debug2: we did not send a packet, disable method
debug1: No more authentication methods to try.
Permission denied (publickey).

I would appreciate it if someone could help me see why I am an idiot and can't figure this out. :) Thanks...

by swill at July 02, 2015 09:43 PM

CompsciOverflow

Using dynamic programming, find the number of all increasing subsequences ending with Xj in a given sequence, for every index j

I got this question today and I'm nowhere near the solution. Given a sequence of real numbers (X1, X2, ..., Xn), write an algorithm, as efficient as possible, that for every index j finds the number of strictly increasing subsequences that end with Xj.

Write a recurrence formula that solves this problem in O(n^2) and a correctness proof.
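One standard recurrence (a sketch of the usual approach, not necessarily the intended solution): let $f(j)$ denote the number of strictly increasing subsequences that end with $X_j$. Counting the subsequence consisting of $X_j$ alone plus every counted subsequence that $X_j$ can extend gives

$$f(j) = 1 + \sum_{\substack{i < j \\ X_i < X_j}} f(i),$$

which can be evaluated for $j = 1,\dots,n$ left to right in $O(n^2)$ time.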

by TCP_Explorer at July 02, 2015 09:41 PM

QuantOverflow

Regression model when samples are small and not correlated

I received this question during an onsite interview for a quant job and I'm still scratching my head on how to solve this problem. Any help would be appreciated.


Mr Quant thinks that there is a linear relationship between past and future intraday returns. So he would like to test this idea. For convenience, he decided to parameterize returns in his data set using a regular time grid $(d,t)$ where $d=0, …, D-1$ labels the date and $t=0, …, T-1$ the intraday time period. For example, if we split the day into 10 minute intervals then $T = 1440 / 10$. His model written on this time grid has the following form:

$$y_{d,t} = \beta_t \, x_{d,t} + \epsilon_{d,t}$$

where $y_{d,t}$ is the return over the time interval $(t,t+1)$ and $x_{d,t}$ is the return over the previous time interval, $(t-1,t)$, on a given day $d$. In other words, he thinks that the previous 10-minute return predicts the future 10-minute return, but the coefficient between them might change intraday.

Of course, to fit $\beta_t$ he can use $T$ ordinary least square regressions, one for each “$t$”, but:

(a) his data set is fairly small $D$=300, $T$=100;

(b) he thinks that signal is very small, at best it has correlation with the target of 5%.

He hopes that some machine learning method that can combine regressions from nearby intraday times can help.

How would you solve this problem? Data provided is an $x$ matrix of predictors of size $300\times100$ and a $y$ matrix of targets of size $300\times100$.

by cogolesgas at July 02, 2015 09:35 PM

StackOverflow

NoClassDefFoundError for Kafka Producer Example

I am getting a NoClassDefFoundError when I try to compile and run the Producer example that comes with Kafka. I want to know how to resolve the error discussed below.

Caveat: I am a C++/C# programmer who is Linux literate and starting to learn Java. I can follow instructions, but may well ask for some clarification along the way.

I have a VM sandbox from Hortonworks that is running a Red Hat appliance. On it I have a working kafka server and by following this tutorial I am able to get the desired Producer posting messages to the server.

Now I want to get down to writing my own code, but first I decided to make sure I can compile the example files that Kafka came with. After a day of trial and error I just cannot seem to get this going.

Here is what I am doing:

I am going to the directory where the example files are located and typing:

javac -cp $KCORE:$KCLIENT:$SCORE:. ./*.java

$KCORE:$KCLIENT:$SCORE resolve to the jars for the kafka core, kafka-client, and scala libraries respectively. Everything returns just fine with no errors and places all the class files in the current directory; however, when I follow up with

javac -cp $KCORE:$KCLIENT:$SCORE:. Producer

I get a NoClassDefFoundError telling me the following (see the error screenshot).

The code for the class is

package kafka.examples;

import java.util.Properties;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class Producer extends Thread
{
  private final kafka.javaapi.producer.Producer<Integer, String> producer;
  private final String topic;
  private final Properties props = new Properties();


  public Producer(String topic)
  {
    props.put("serializer.class", "kafka.serializer.StringEncoder");
    props.put("metadata.broker.list", "localhost:9092");
    // Use random partitioner. Don't need the key type. Just set it to Integer.
    // The message is of type String.
    producer = new kafka.javaapi.producer.Producer<Integer, String>(new ProducerConfig(props));
    this.topic = topic;
  }

  public void run() {
    int messageNo = 1;
    while(true)
    {
      String messageStr = new String("Message_" + messageNo);
      producer.send(new KeyedMessage<Integer, String>(topic, messageStr));
      messageNo++;
    }
  }

}

Can anybody point me in the right direction to resolve this error? Do the classes need to go in different directories for some reason?

by Semicolons and Duct Tape at July 02, 2015 09:26 PM

StackOverflow

How to add library dependency to classpath of Build.scala?

I'm trying to use the sqlite-jdbc driver in my Build.scala to generate a sqlite db with some necessary tables before compilation. This is what I wrote to achieve that:

compile in Compile <<= (compile in Compile) map { result =>
  val sqliteDb = file("my_sqlite.db")
  if (!sqliteDb.exists()) {
    val connection = DriverManager.getConnection(s"jdbc:sqlite:${sqliteDb.getAbsolutePath}")
    val statement = connection.prepareStatement("create table EXAMPLE ( ... );")
    statement.execute()
    statement.close()
    connection.close()
  }
  result
}

That's all well and good, but when I run compile I get this error:

[error] (my-project/compile:compile) java.sql.SQLException: No suitable driver found for jdbc:sqlite:/Users/2rs2ts/src/my-project/my_sqlite.db

Now that's a bit frustrating since I thought I could add that dependency to Build.scala's classpath by creating a recursive project. My directory structure looks like this:

my-project/
  project/
    Build.scala
    build.sbt
    project/
      build.sbt

And my-project/project/project/build.sbt looks like this:

libraryDependencies += "org.xerial" % "sqlite-jdbc" % "3.8.10.1"

Edit: I also put that line in my-project/project/build.sbt and it did not resolve my issue.

So... what did I do wrong? I need this dependency on the classpath in order to get the sqlite driver to work.
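A hedged sketch of one common workaround for "No suitable driver" (assuming the sqlite-jdbc dependency really is visible to the build classpath, e.g. declared in my-project/project/build.sbt): the error can also occur when the driver class has never been loaded, so registering it explicitly with Class.forName before asking DriverManager for a connection sometimes resolves it.

import java.sql.DriverManager

compile in Compile <<= (compile in Compile) map { result =>
  val sqliteDb = file("my_sqlite.db")
  if (!sqliteDb.exists()) {
    Class.forName("org.sqlite.JDBC") // force driver registration with DriverManager
    val connection = DriverManager.getConnection(s"jdbc:sqlite:${sqliteDb.getAbsolutePath}")
    try {
      val statement = connection.prepareStatement("create table EXAMPLE ( ... );")
      statement.execute()
      statement.close()
    } finally connection.close()
  }
  result
}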

by 2rs2ts at July 02, 2015 09:18 PM

CompsciOverflow

Find the subset of k element between n that maximize the total distance

Given a set $Q\subset \mathbb{N}^m $ of $n$ points, we want to find the subset $S_{max}\subset Q$ of $k$ elements that maximize the total distance between them, according to the $\ell^1$ norm.

$$S_{max} = \arg \max_S\sum_{i,j \in S, i \ne j} d(x_i,x_j)$$

In my specific case, $Q\subset \{ 0, 1 \} ^m $, thus $d(\cdot,\cdot)$ is equal to the Hamming distance.

Is there any efficient way to solve this problem? Is it possible to rewrite it in another simpler way?

by Barbus at July 02, 2015 09:04 PM

How to get the intersection of two regular languages?

I was trying to figure out the value of the following expression.

(0 | 1)*00 ∩ (0 | 1)*01

I tried drawing (0 | 1)*00 and (0 | 1)*01 separately and combining them using a table.

Is this the approach I should use, or is there a better way?

by Dinal24 at July 02, 2015 09:03 PM

StackOverflow

Spark Streaming MQTT

I've been using spark to stream data from kafka and it's pretty easy.

I thought using the MQTT utils would also be easy, but it is not for some reason.

I'm trying to execute the following piece of code.

  val sparkConf = new SparkConf(true).setAppName("amqStream").setMaster("local")
  val ssc = new StreamingContext(sparkConf, Seconds(10))

  val actorSystem = ActorSystem()
  implicit val kafkaProducerActor = actorSystem.actorOf(Props[KafkaProducerActor])

  MQTTUtils.createStream(ssc, "tcp://localhost:1883", "AkkaTest")
    .foreachRDD { rdd =>
      println("got rdd: " + rdd.toString())
      rdd.foreach { msg =>
        println("got msg: " + msg)
      }
    }

  ssc.start()
  ssc.awaitTermination()

The weird thing is that spark logs the msg I sent in the console, but not my println.

It logs something like this:

19:38:18.803 [RecurringTimer - BlockGenerator] DEBUG o.a.s.s.receiver.BlockGenerator - Last element in input-0-1435790298600 is SOME MESSAGE

by Thiago Pereira at July 02, 2015 09:02 PM

How to break/escape from a for loop in Scala?

I'm new to Scala and have searched a lot for the solution. I'm querying the database and storing the value of the http request, parsed as a json4s object, in response. I wait for the response and parse the json.

val refService = url("http://url//")
val response = Http(refService OK dispatch.as.json4s.Json)
var checkVal :Boolean = true
val json = Await.result(response, 30 seconds)

val data = json \ "data"

I want to run a loop and check if the value of "name" is present in the data returned. If present I want to break and assign checkVal to false. So far I have this:

for {
  JObject(obj) <- data
  JField("nameValue", JString(t)) <- obj // nameValue is the column name in the returned data
} yield {
  checkVal = if (t == name) { break }
  else true
}

Eclipse is giving me the following error: type mismatch; found: List[Unit], required: List[String]. Please advise. Thank you.
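One way to avoid break entirely, shown as a sketch that reuses the extraction pattern from the question (name is the value being searched for): collect the matching values first, then derive the flag.

val nameValues: List[String] = for {
  JObject(obj) <- data
  JField("nameValue", JString(t)) <- obj
} yield t

val checkVal: Boolean = !nameValues.contains(name) // false as soon as the name is present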

by pal at July 02, 2015 08:55 PM

QuantOverflow

Is probability implied by binary FX options risk neutral or real world?

If we consider binary FX options in the market and estimate the market implied probabilities of certain FX rates occurring, would these resulting probabilities be risk neutral or real world?

I hear the term "market implied probability" being used in the workplace, estimated from binary options, and I am not sure whether this relates to risk-neutral or real-world probabilities.

by Zakoff at July 02, 2015 08:55 PM

Pricing an American call under the CGMY model

I am pricing an American call under the CGMY model ($0 < Y < 1$) with strike $K$ at grid point $(x_i,\tau_j)$, where $x_i=x_{min}+i\,\Delta x$ for $i=0,1,...,N$ and $\Delta x=\frac{x_{max}-x_{min}}{N}$. Why, in the region $y\in(x_N-x_i,\infty)$, do we have

\begin{align} \int_{x_N-x_i}^{\infty}(w(x_i+y,\tau_j)-w(x_i,\tau_j))\frac{\exp(-\lambda_p\, y)}{\nu(x_i,\tau_j)\, y^{1+Y}}\,\,dy=0 \end{align}

Where $w(x_i,\tau_j)$ is the premium at $(x_i,\tau_j)$.

by Behrouz Maleki at July 02, 2015 08:52 PM

StackOverflow

extends Super in com.github.nscala_time.time.DurationBuilder

I recently downloaded the source code of the com.github.nscala_time package, version 2.11. After setting up the dependency in Maven, I got lots of errors. I checked one file, com.github.nscala_time.time.DurationBuilder, and it has a line like:

class DurationBuilder(val underlying: Period) extends Super {..

There is no class or type named "Super" in the same package or in the imported packages. I am wondering, does Scala have a type called "Super"? The Eclipse Scala 2.11 compiler complains that it cannot find the type "Super".

by James at July 02, 2015 08:46 PM

StackOverflow

Break out of loop in scala while iterating on list

I am trying to solve a problem.

Problem: You are given a sequence of N balls in 4 colors: red, green, yellow and blue. The sequence is full of colors if and only if all of the following conditions are true:

  1. There are as many red balls as green balls.
  2. There are as many yellow balls as blue balls.
  3. The difference between the number of red balls and green balls in every prefix of the sequence is at most 1.
  4. The difference between the number of yellow balls and blue balls in every prefix of the sequence is at most 1.

Your task is to write a program which, for a given sequence, prints True if it is full of colors, otherwise it prints False.

My solution: for each string, I am generating all possible prefixes and suffixes to validate conditions 3 and 4, but it is taking too much time.

Instead of generating prefixes and validating the conditions every time, we can iterate over the string and validate the conditions as we go. I want to break out of the loop when a condition is not met, but I am not able to express that in a functional style. Can someone help me achieve it?

My solution:

object Test {

    def main(args: Array[String]) {

      def isValidSequence(str: String) = {
        def isValidCondition(ch1:Char, ch2:Char, m:Map[Char, Int]):Boolean = m.getOrElse(ch1, 0) - m.getOrElse(ch2, 0) > 1
        def groupByChars(s:String) = s.groupBy(ch => ch).map(x => (x._1, x._2.length))
        def isValidPrefix(s:String):Boolean = (1 to s.length).exists(x => isValidCondition('R', 'G', groupByChars(s.take(x))))

        val x = groupByChars(str)
        lazy val cond1 = x.get('R') == x.get('G')
        lazy val cond2 = x.get('B') == x.get('Y')
        lazy val cond3 = isValidPrefix(str)
        lazy val cond4 = isValidPrefix(str.reverse)

        cond1 && cond2 && !cond3 && !cond4
      }
      def printBoolValue(b:Boolean) = if(b) println("True") else println("False")

      val in = io.Source.stdin.getLines()
      val inSize = in.take(1).next().toInt
      val strs = in.take(inSize)
      strs.map(isValidSequence(_)).foreach(printBoolValue)
    }
}
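A sketch of one possible single-pass check (not the poster's code): forall stops at the first failing prefix, which plays the role of "break" in a functional style.

def isFullOfColors(s: String): Boolean = {
  val counts = scala.collection.mutable.Map.empty[Char, Int].withDefaultValue(0)
  val prefixesOk = s.forall { ch =>
    counts(ch) += 1
    math.abs(counts('R') - counts('G')) <= 1 && math.abs(counts('Y') - counts('B')) <= 1
  }
  // conditions 3 and 4 checked per prefix above; 1 and 2 checked on the full counts
  prefixesOk && counts('R') == counts('G') && counts('Y') == counts('B')
}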

by jos at July 02, 2015 08:35 PM

CompsciOverflow

What techniques can I use to hand-write a parser for an ambiguous grammar?

I'm writing a compiler, and I've built a recursive-descent parser to handle the syntax analysis. I'd like to enhance the type system to support functions as a valid variable type, but I'm building a statically typed language, and my desired syntax for a function type renders the grammar temporarily* ambiguous until resolved. I'd rather not use a parser generator, though I know that Elkhound would be an option. I know I can alter the grammar to make it parse within a fixed number of steps, but I'm more interested in how to implement this by hand.

I've made a number of attempts at figuring out the high-level control flow, but every time I do this I end up forgetting a dimension of complexity, and my parser becomes impossible to use and maintain.

There are two layers of ambiguity: a statement can be an expression, a variable definition, or a function declaration, and the function declaration can have a complex return type.

Grammar subset, demonstrating the ambiguity:

basetype
  : TYPE
  | IDENTIFIER
  ;

type
  : basetype
  | basetype parameter_list
  ;

arguments
  : arraytype IDENTIFIER
  | arraytype IDENTIFIER COMMA arguments
  ;

argument_list
  : OPEN_PAREN CLOSE_PAREN
  | OPEN_PAREN arguments CLOSE_PAREN
  ;

parameters
  : arraytype
  | arraytype COMMA parameters
  ;

parameter_list
  : OPEN_PAREN CLOSE_PAREN
  | OPEN_PAREN parameters CLOSE_PAREN
  ;

expressions
  : expression
  | expression COMMA expressions
  ;

expression_list
  : OPEN_PAREN CLOSE_PAREN
  | OPEN_PAREN expressions CLOSE_PAREN
  ;

// just a type that can be an array (this language does not support
// multidimensional arrays)
arraytype:
  : type
  | type OPEN_BRACKET CLOSE_BRACKET
  ;

block
  : OPEN_BRACE CLOSE_BRACE
  | OPEN_BRACE statements CLOSE_BRACE
  ;

function_expression
  : arraytype argument_list block
  | arraytype IDENTIFIER argument_list block
  ;

call_expression
  : expression expressions
  ;

index_expression
  : expression OPEN_BRACKET expression CLOSE_BRACKET
  ;

expression
  : function_expression
  | call_expression
  | index_expression
  | OPEN_PAREN expression CLOSE_PAREN
  ;

function_statement
  : arraytype IDENTIFIER argument_list block
  ;

define_clause
  : IDENTIFIER
  | IDENTIFIER ASSIGN expression
  ;

define_chain
  : define_clause
  | define_clause COMMA define_chain
  ;

define_statement
  : arraytype define_chain
  ;

statement
  : function_statement
  | define_statement SEMICOLON
  | expression SEMICOLON
  ;

statements
  :
  | statement statements
  ;

Example parses:

// function 'fn' returns a reference to a void, parameterless function
void() fn() {}
// the parser doesn't know which of these are types and which are variables,
// so it doesn't know until the end that this is a call_expression
Object(var, var, var, var)
// the parser only finds out at the end that this is a function declaration
Object(var, var, var, var) fn2() {}
(Object(var, var, var, var) ())
(Object(var, var, var, var) () {})
// the parser could possibly detect the "string" type and figure out that
// this has to be a define statement or a function declaration, but it's
// still ambiguous to the end
Object(Object(string, Object), string[]) fn3() {}
Object(Object(string, Object), string[]) fn4 = fn3;

My basic approach has been to write functions that could parse the unambiguous components of this subset of the grammar, and then flatten the more complex control flow into individual functional blocks to capture state in function calls. This has proved unsuccessful. What techniques can one use to solve this kind of problem?

*There is likely a better word for this

by skeggse at July 02, 2015 08:31 PM

StackOverflow

Executing non-database actions in a transaction in Slick 3

I'm having trouble understanding the new Slick DBIOAction API, which does not seem to have a lot of examples in the docs. I am using Slick 3.0.0, and I need to execute some DB actions and also some calculations with the data received from the database, but all of those actions have to be done inside a single transaction. I'm trying to do the following:

  1. Execute a query to database (the types table).
  2. Do some aggregations and filtering of the query results (this calculation can't be done on the database).
  3. Execute another query, based on the calculations from step 2 (the messages table — due to some limitations, this query has to be in raw SQL).
  4. Join data from step 2 and 3 in memory.

I want the queries from step 1 and 3 to be executed inside a transaction, as the data from their result sets has to be consistent.

I've tried to do this in a monadic join style. Here's an overly simplified version of my code, but I can't even get it to compile:

  val compositeAction = (for {
    rawTypes <- TableQuery[DBType].result
    (projectId, types) <- rawTypes.groupBy(_.projectId).toSeq.map(group => (group._1, group._2.slice(0, 10)))
    counts <- DBIO.sequence(types.map(aType => sql"""select count(*) from messages where type_id = ${aType.id}""".as[Int]))
  } yield (projectId, types.zip(counts))).transactionally
  1. The first row of the for comprehension selects the data from the types table.
  2. The second row of the for comprehension is supposed to do some grouping and slicing of the results, resulting in a Seq[(Option[String], Seq[String])].
  3. The third row of the for comprehension has to execute a set of queries for every element from the previous step; in particular, it has to execute a single SQL query for each of the values inside Seq[String]. So in the third row I build a sequence of DBIOActions.
  4. The yield clause zips types from the second step with counts from the third step.

This construction, however, does not work and gives two compile time errors:

Error:(129, 16) type mismatch;
 found   : slick.dbio.DBIOAction[(Option[String], Seq[(com.centreit.proto.repiso.storage.db.models.DBType#TableElementType, Vector[Int])]),slick.dbio.NoStream,slick.dbio.Effect]
    (which expands to)  slick.dbio.DBIOAction[(Option[String], Seq[(com.centreit.proto.repiso.storage.db.models.TypeModel, Vector[Int])]),slick.dbio.NoStream,slick.dbio.Effect]
 required: scala.collection.GenTraversableOnce[?]
        counts <- DBIO.sequence(types.map(aType => sql"""select count(*) from messages where type_id = ${aType.id}""".as[Int]))
               ^
Error:(128, 28) type mismatch;
 found   : Seq[Nothing]
 required: slick.dbio.DBIOAction[?,?,?]
        (projectId, types) <- rawTypes.groupBy(_.projectId).toSeq.map(group => (group._1, group._2.slice(0, 10)))
                           ^

I've tried to wrap the second line in a DBIOAction by using DBIO.successful, which is supposed to lift a constant value into the DBIOAction monad:

(projectId, types) <- DBIO.successful(rawTypes.groupBy(_.projectId).toSeq.map(group => (group._1, group._2.slice(0, 10))))

But in this code the types variable is inferred to be Any, and the code does not compile because of that.
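A rough sketch (keeping the table and column names from the question, and assuming an ExecutionContext is available for DBIO's map/flatMap) of one way to keep everything inside a single DBIO for comprehension: bind the pure grouping step with = instead of <-, and nest DBIO.sequence so each project's counts are fetched per group.

import scala.concurrent.ExecutionContext.Implicits.global // DBIO.map/flatMap need an EC

val compositeAction = (for {
  rawTypes  <- TableQuery[DBType].result
  grouped    = rawTypes.groupBy(_.projectId).toSeq.map { case (pid, ts) => (pid, ts.take(10)) }
  perProject <- DBIO.sequence(grouped.map { case (pid, types) =>
    DBIO.sequence(types.map(aType =>
        sql"""select count(*) from messages where type_id = ${aType.id}""".as[Int].head))
      .map(counts => (pid, types.zip(counts)))
  })
} yield perProject).transactionally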

by Sergey Petunin at July 02, 2015 08:29 PM

CompsciOverflow

Algorithms for minimizing Moore automata

Brzozowski's algorithm can be extended to Moore automata but its time complexity is exponential in general. Is there any other algorithm for minimization of Moore automata? What are the running times of these algorithms if any?

by Ajeet Singh at July 02, 2015 08:23 PM

/r/emacs

Emacs equivalent of vim: set ft=sh

What is the Emacs equivalent of the vim command:

:set ft=sh to set the file as a bash file / shell script
:set ft=py to set the file as a python file
:set ft=rb and so on, to set it as a ruby file

??

To set the current buffer's file type.

submitted by eniacsparc2xyz

July 02, 2015 08:23 PM

CompsciOverflow

How to select the maximum weight value for a bias node in a neural network?

I'm programming a neural network. I know that I should initialize the network by picking random weights. How do I pick a random weight for the connections to bias nodes? What distribution should I use for these weights? I can pick a random value from the range $[0,U]$, but what value should I use for the upper limit $U$?

What I've tried: I've set $U$ to correspond to the fraction of inputs that the node needs in order to fire (a value between 0.0 and 1.0), multiplied by the number of inputs that the node receives. So a node with a value of 0.7 and 40 inputs would have $U = 0.7 \times 40 = 28$, and the bias weight would then be chosen uniformly at random from the interval $[0,U]$. Is this an okay way to set a bias weight? Which methods are standard?

by Samuel Mungy at July 02, 2015 08:20 PM

StackOverflow

Play 2.4.1, PlayEbean not found

After updating from 2.2 to 2.4, I followed the instructions on the Migration page, but am getting an error saying the value PlayEbean was not found.

What am I doing wrong? As far as I can tell I only have to add that one line to the plugins.sbt file and it should work, right?

The files:

project/Build.scala:

import sbt._
import Keys._

import play.sbt.PlayImport._
import PlayKeys._

object BuildSettings {
    val appVersion        = "0.1"
    val buildScalaVersion = "2.11.7"

    val buildSettings = Seq (
        version      := appVersion,
        scalaVersion := buildScalaVersion
    )
}

object Resolvers {
    val typeSafeRepo = "Typesafe repository" at "http://repo.typesafe.com/typesafe/releases/"
    val localRepo = "Local Maven Repositor" at "file://"+Path.userHome.absolutePath+"/.m2/repository"
    val bintrayRepo = "scalaz-bintray" at "https://dl.bintray.com/scalaz/releases"
    val sbtRepo = "Public SBT repo" at "https://dl.bintray.com/sbt/sbt-plugin-releases/"

    val myResolvers = Seq (
        typeSafeRepo,
        localRepo,
        bintrayRepo,
        sbtRepo
    )
}

object Dependencies {
        val mindrot = "org.mindrot" % "jbcrypt" % "0.3m"
        val libThrift = "org.apache.thrift" % "libthrift" % "0.9.2"
        val commonsLang3 = "org.apache.commons" % "commons-lang3" % "3.4"
        val commonsExec = "org.apache.commons" % "commons-exec" % "1.3"
        val guava = "com.google.guava" % "guava" % "18.0"
        val log4j = "org.apache.logging.log4j" % "log4j-core" % "2.3"
        val jacksonDataType = "com.fasterxml.jackson.datatype" % "jackson-datatype-joda" % "2.5.3"
        val jacksonDataformat = "com.fasterxml.jackson.dataformat" % "jackson-dataformat-xml" % "2.5.3"
        val postgresql = "postgresql" % "postgresql" % "9.3-1103.jdbc41"

        val myDeps = Seq(
            // Part of play
            javaCore,
            javaJdbc,
            javaWs,
            cache,

            // User defined
            mindrot,
            libThrift,
            commonsLang3,
            commonsExec,
            guava,
            log4j,
            jacksonDataType,
            jacksonDataformat,
            postgresql
        )
}

object ApplicationBuild extends Build {
    import Resolvers._
    import Dependencies._
    import BuildSettings._

    val appName = "sandbox"

    val main = Project(
            appName, 
            file("."),
            settings = buildSettings ++ Seq (resolvers := myResolvers, libraryDependencies := myDeps)
        )
    .enablePlugins(play.PlayJava, PlayEbean)
    .settings(jacoco.settings: _*)
    .settings(parallelExecution in jacoco.Config := false)
    .settings(javaOptions in Test ++= Seq("-Xmx512M"))
    .settings(javaOptions in Test ++= Seq("-XX:MaxPermSize=512M"))
}

project/plugins.sbt:

// Use the Play sbt plugin for Play projects
addSbtPlugin("com.typesafe.play" % "sbt-plugin" % "2.4.1")

// The Typesafe repository
resolvers ++= Seq(
  "Typesafe repository" at "http://repo.typesafe.com/typesafe/releases/",
  "Local Maven Repositor" at "file://"+Path.userHome.absolutePath+"/.m2/repository",
  "scalaz-bintray" at "https://dl.bintray.com/scalaz/releases",
  "Public SBT repo" at "https://dl.bintray.com/sbt/sbt-plugin-releases/"
  )

libraryDependencies ++= Seq(
  "com.puppycrawl.tools" % "checkstyle" % "6.8",
  "com.typesafe.play" %% "play-java-ws" % "2.4.1",
  "org.jacoco" % "org.jacoco.core" % "0.7.1.201405082137" artifacts(Artifact("org.jacoco.core", "jar", "jar")),
  "org.jacoco" % "org.jacoco.report" % "0.7.1.201405082137" artifacts(Artifact("org.jacoco.report", "jar", "jar"))
)

// Plugin for code coverage
addSbtPlugin("de.johoop" % "jacoco4sbt" % "2.1.6")

// Play enhancer - this automatically generates getters/setters for public fields
// and rewrites accessors of these fields to use the getters/setters. Remove this
// plugin if you prefer not to have this feature, or disable on a per project
// basis using disablePlugins(PlayEnhancer) in your build.sbt
addSbtPlugin("com.typesafe.sbt" % "sbt-play-enhancer" % "1.1.0")

// Play Ebean support, to enable, uncomment this line, and enable in your build.sbt using
// enablePlugins(SbtEbean). Note, uncommenting this line will automatically bring in
// Play enhancer, regardless of whether the line above is commented out or not.
addSbtPlugin("com.typesafe.sbt" % "sbt-play-ebean" % "1.0.0")

by KdgDev at July 02, 2015 08:16 PM

Planet Emacsen

Pragmatic Emacs: Append to kill-ring

We previously looked at cycling through the history of the kill-ring (emacs’ clipboard). A somewhat related idea is that you can append text that you cut or copy onto the last entry of the kill-ring so that you can accumulate several pieces of text into a single clipboard entry.

You do this by preceding a cut or copy command with C-M-w (this also happens automatically if you use successive cut/copy commands without anything else in between).

This is explained in the emacs manual, but as a simple illustration in the animation below I cut the line bbbbbbb and then move to the ddddddd line and hit C-M-w before cutting that line. My paste command then pastes the two lines.

append-to-kill-ring.gif

by Ben Maughan at July 02, 2015 08:13 PM

Fefe

Uuuuuh, this could get expensive: the Supreme Court ...

Uuuuuh, this could get expensive: the Supreme Court of Oklahoma is allowing lawsuits by homeowners against fracking oil companies when their property has been damaged by fracking-induced earthquakes. Proving that will probably be a bit difficult.

July 02, 2015 08:01 PM

StackOverflow

How can assign different return types to a function in Scala?

I am trying to write a function which should work for different pairs of input types. I have overridden "+ - / *" in Scala for my specific use. Each one (+, -, *, /) has three implementations based on the input: I have RDD and Float as inputs, so it can be a + between RDD and RDD, or Float and RDD, or Float and Float, and so on.

Now I have a parser which reads an expression from the input, like RDD+1, parses it and creates postfix notation to make the calculation easier, like RDD 1 +, and then I want to do the calculation using my implemented +. With the help of this algorithm I am trying to change it so that it performs a calculation based on my input expression. For instance it contains:

 var lastOp: (Float, Float) => Float = add

How can I change this: (Float, Float) => Float to something that will accept (RDD, Float)|(RDD, RDD) |(Float, Float) => RDD = add // my implementation of add ???

Edit:

I added this part with the help of the two answers below. OK, I wrote this:

     def lastop:(Either[RDD[(Int,Array[Float])], Float], Either[RDD[(Int,Array[Float])], Float]) => RDD[(Int,Array[Float])] = sv.+

in which sv is an instance of my other class, in which I have overridden + in two different ways. Now I am getting an error, which I guess is because the compiler gets confused about which implementation to use. This is the

       error:  type mismatch;
       [error]  found   : (that: org.apache.spark.rdd.RDD[(Int, Array[Float])])org.apache.spark.rdd.RDD[(Int, Array[Float])] <and> (that: Float)org.apache.spark.rdd.RDD[(Int, Array[Float])]
       [error]  required: (Either[org.apache.spark.rdd.RDD[(Int, Array[Float])],Float], Either[org.apache.spark.rdd.RDD[(Int, Array[Float])],Float]) => org.apache.spark.rdd.RDD[(Int, Array[Float])]

Note: what it says it found are two different implementations for "+"
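
To make the shape of what I am after concrete, here is a minimal self-contained sketch (my own simplification: Vector[Float] stands in for my RDD[(Int,Array[Float])], and add is written out by hand):

    type Operand = Either[Vector[Float], Float]

    def add(a: Operand, b: Operand): Vector[Float] = (a, b) match {
      case (Left(xs), Left(ys))   => xs.zip(ys).map { case (x, y) => x + y }
      case (Left(xs), Right(c))   => xs.map(_ + c)
      case (Right(c), Left(ys))   => ys.map(_ + c)
      case (Right(c1), Right(c2)) => Vector(c1 + c2)
    }

    // a single function value can now stand in for all three input combinations
    var lastOp: (Operand, Operand) => Vector[Float] = add

As I read the error message, the mismatch is that sv.+ is overloaded with two single-argument signatures, neither of which matches the declared two-argument function type over Either.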

by Rubbic at July 02, 2015 07:55 PM

/r/emacs

Global evil mode

Is it possible to set evil mode globally so all buffers have the same mode?

Right now, it seems like each buffer has its own mode. While that is convenient in some cases, I need to keep checking the mode every time I switch between buffers.

submitted by dalavana

July 02, 2015 07:50 PM

StackOverflow

Is it good to avoid the macro in this example?

I read that data > functions > macros

Say you want to evaluate code in a postfix fashion.

Which approach would be better?

;; Macro

(defmacro reverse-fn [expression]
  (conj (butlast expression) (last expression)))

(reverse-fn ("hello world" println))
; => "hello world"


;; Function and data

(def data ["hello world" println])

(defn reverse-fn [data] 
  (apply (eval (last data)) (butlast data)))

(reverse-fn ["hello world" println])
; => "hello world"

Thanks!

by leontalbot at July 02, 2015 07:49 PM

"eval" in Scala

Can Scala be used to script a Java application?

I need to load a piece of Scala code from Java, set up an execution scope for it (data exposed by the host application), evaluate it and retrieve a result object from it.

The Scala documentation shows how easy it is to call compiled Scala code from Java (because it gets turned into regular JVM bytecode).

But how can I evaluate a Scala expression on the fly (from Java or if that is easier, from within Scala) ?

For many other languages, there is the javax.scripting interface. Scala does not seem to support it, and I could not find anything in the Java/Scala interoperability docs that does not rely on ahead-of-time compilation.
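
For what it's worth, a minimal sketch of the kind of thing I am after, using the ToolBox API (this assumes scala-compiler and scala-reflect are on the classpath):

    import scala.reflect.runtime.currentMirror
    import scala.tools.reflect.ToolBox

    object EvalSketch extends App {
      // build a ToolBox from the runtime mirror
      val toolbox = currentMirror.mkToolBox()

      // parse and evaluate a Scala expression given as a string
      val result = toolbox.eval(toolbox.parse("List(1, 2, 3).map(_ * 2).sum"))
      println(result) // 12
    }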

by Thilo at July 02, 2015 07:47 PM

How to capture the return value of a method in scala?

Is there a way to do the following in scala. Basically I want to perform a series of evaluations in a method and return a boolean value. I want to then perform some other operations outside the method using that boolean value.

consider we have a method:

def getValue (criteria: List[String])(implicit context: ExecutionContext):(Boolean) = {
 //do something
 return value}

if(getValue)  //ERROR: missing arguments for method getbbgValue;
// follow this method with `_' if you want to treat it as a partially applied function
 {
 // do something
 } 

Is this valid? Can we do the "if" comparison in any way? When I tried this way, I got the error as mentioned in the code.
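
To make the intent concrete, here is a sketch of the call I am trying to write (the criteria values and the imported ExecutionContext are just placeholders):

    import scala.concurrent.ExecutionContext.Implicits.global

    // apply the argument list first, then branch on the Boolean result
    val matched: Boolean = getValue(List("abc", "def"))
    if (matched) {
      // do something with the result outside the method
    }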

by pal at July 02, 2015 07:37 PM

QuantOverflow

Do futures follow physical or risk-neutral distributions

I've spent a while looking for an answer to this question and while I feel it is a simple question I have not found an answer.

I know prices of option contracts follow an implied, risk-neutral distribution which is observable and equities follow an unobservable, physical distribution of returns. Now, do futures contracts follow a risk-neutral or physical distribution? Or is my thinking flawed at some point?

I'm still learning so I would greatly appreciate anyone providing me some direction with this.

by Joe Yurkanin at July 02, 2015 07:34 PM

/r/compsci

Where can I buy online the poster from the Computer History Museum?

I went to the Computer History Museum in Mountain View, CA, and bought the poster with the flowchart of the history of programming languages, and I lost it!! I live very far away from the museum, so I was wondering if anybody knows where I can buy it online? It looks something like this: http://codeincomplete.com/posts/2011/12/28/computer_history_museum/languages.v224.png

submitted by wh0knowswhat

July 02, 2015 07:20 PM

StackOverflow

Mocking database with Slick in ScalaTest + Mockito and testing UPDATE

The documentation for unit testing a Scala application https://www.playframework.com/documentation/2.4.x/ScalaTestingWithScalaTest talks about mocking the database access using Mockito. While this method works very well for testing methods that get information from the database, I'm not seeing a clear solution for how to test methods that insert, update or delete data.

This is what I have setup so far:

trait UserRepository { self: HasDatabaseConfig[JdbcProfile] =>
  import driver.api._

  class UserTable(tag: Tag) extends Table[userModel](tag, "users") {
     def id = column[Int]("id", O.PrimaryKey, O.AutoInc )
     def email = column[String]("email")
     def * = (id.?, email) <> (userModel.tupled, userModel.unapply _)
  }

  def allUsers() : Future[Seq[userModel]]
  def update(user: userModel) : Future[Int]
}

class SlickUserRepository extends UserRepository with HasDatabaseConfig[JdbcProfile] {
  import driver.api._
  protected val dbConfig = DatabaseConfigProvider.get[JdbcProfile](Play.current)

  private val users = TableQuery[UserTable]

  override def allUsers(): Future[Seq[userModel]] = {
     db.run(users.result)
  }

  def update(user: userModel): Future[Int] = {
     db.run(users.filter(_.id === user.id).update(user))
  }
}

class UserService(userRepository: UserRepository) {
  def getUserById(id: Int): Future[Option[userModel]] = {
     userRepository.allUsers().map { users =>
        users.find(_.id.get == id)
     }
  }

  // TODO, test this...
  def updateUser(user: userModel): Future[Int] = {
     userRepository.update(user)
  }
}

And then my tests:

class UserSpec extends PlaySpec with MockitoSugar with ScalaFutures {
  "UserService" should {
    val userRepository = mock[UserRepository]
    val user1 = userModel(Option(1), "user1@test.com")
    val user2 = userModel(Option(2), "user2@test.com")

    // mock the access and return our own results
    when(userRepository.allUsers) thenReturn Future {Seq(user1, user2)}

    val userService = new UserService(userRepository)

    "should find users correctly by id" in {
      val future = userService.getUserById(1)

      whenReady(future) { user =>
        user.get mustBe user1
      }
    }

    "should update user correctly" in {
       // TODO test this
    }
  }
}

I suppose I need to mock out the 'update' method and create a stub that takes the argument and updates the mocked data. However, my skills in Scala are limited and I can't wrap my head around it. Is there perhaps a better way?
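
The kind of body I have in mind for that empty test looks roughly like this (a sketch; the matcher import location and the stub are my assumptions):

    import org.mockito.Matchers.any
    import org.mockito.Mockito.verify

    "should update user correctly" in {
      val updatedUser = userModel(Option(1), "changed@test.com")
      // stub the repository so the update reports one affected row
      when(userRepository.update(any[userModel])) thenReturn Future.successful(1)

      whenReady(userService.updateUser(updatedUser)) { rowCount =>
        rowCount mustBe 1
      }
      verify(userRepository).update(updatedUser)
    }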

Thanks!

by beefd0g at July 02, 2015 07:16 PM

Unwrap type variables

I am trying to "unwrap" a type variable from a generic type (without using reflection). E.g. in the case of an Option, the goal would be that the following (or similar) code compiles:

implicitly[OptionUnwrapper[Option[Int]] =:= Int]

I managed to come up with the following unwrapper:

trait OptionUnwrapper[T] {
  type Unwrap[_ <: Option[T]] = T
}

which can be used to as follows:

implicitly[OptionUnwrapper[Int]#Unwrap[Option[Int]] =:= Int]

The problem with this solution is that I have to declare the type of interest (in my case Int) before it can be returned. Is there a way to leave out this parameter, as for instance in generic functions like:

def foo[T](x: Option[T]) = x.isDefined
foo[Int](Some(1)) // function call with unnecessary type parameter,
foo(Some(1))      // since compiler infers Int automatically

UPDATE 1 - Use Case:

In my actual use case, I want to receive a "tuple type" from a case class. i.e. lets say I have a case class

case class Person(name: (String, String), age: Int, hobbies: List[String])

I want something that yields the type of the unapplied Person. i.e.

type T = Tupled[Person] // T =:= Tuple3[Tuple2[String, String], Int, List[String]]

I tried to get to this point by using the type of the Person.unapply function, which would be

Function1[Person, Option[Tuple3[...]]]

From this point, the plan would be to "unwrap" the Function1[_, Option[X]] to X
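
For reference, the direction I am currently sketching is the usual "Aux" type-class encoding (names are mine; shapeless's Generic solves the closely related case-class-to-HList problem):

    trait Unwrapped[F] { type Out }

    object Unwrapped {
      type Aux[F, O] = Unwrapped[F] { type Out = O }

      implicit def option[T]: Aux[Option[T], T] =
        new Unwrapped[Option[T]] { type Out = T }
    }

    // the element type is inferred, it never has to be written at the call site
    def elemType[F, O](x: F)(implicit u: Unwrapped.Aux[F, O]): Unit = ()
    elemType(Option(1))   // O is inferred as Int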

by Zwackelmann at July 02, 2015 07:15 PM

Ansible error message

I'm trying to use ansible to build a docker image locally but I'm running into problems.

- hosts: all
  tasks:
    - name: Build Docker image
      local_action:
          module: docker_image
          path: .
          name: SlothSloth
          state: present

And my /etc/ansible/hosts contains

localhost   ansible_connection=local

But when I try to run it I get:

TASK: [Build Docker image] **************************************************** 
failed: [localhost -> 127.0.0.1] => {"failed": true, "parsed": false}
failed=True msg='failed to import python module: No module named docker.client'


FATAL: all hosts have already failed -- aborting

by tommy_p1ckles at July 02, 2015 07:09 PM

QuantOverflow

Is "eoddata" a good data source?

Not sure if this is a relevant question for this site, but I am looking to move to www.eoddata.com as my data source.

If anyone has used it, can you tell me how the data quality is?

I am currently parsing Yahoo for my prices but it's not a very efficient process.

What are the other sources of equity data out there (affordable for retail investors like me)? I am willing to pay \$250-\$300 per year for it.

by silencer at July 02, 2015 07:07 PM

StackOverflow

Play Framework InjectedRoutesGenerator Error

I have created a new Play + Scala project using the latest Typesafe Activator, and when trying to import it into the IntelliJ IDE I got the error below:

    [info] Loading project definition from E:\Personal\Scala Workspace\DeployZip\project
    E:\Personal\Scala Workspace\DeployZip\build.sbt:18: error: not found: value routesGenerator
    routesGenerator := InjectedRoutesGenerator
    ^
    [error] Type error in expression
    Consult IDE log for more details (Help | Show Log)

What are the possible reasons for this error?

by Nishan at July 02, 2015 07:04 PM

QuantOverflow

Distribution of stochastic integral

Suppose that $f(t)$ is a deterministic square integrable function. I want to show $$\int_{0}^{t}f(\tau)dW_{\tau}\sim N(0,\int_{0}^{t}|f(\tau)|^{2}d\tau)$$.

I want to know if the following approach is correct and/or if there's a better approach.

First note that $$\int_{0}^{t}f(\tau)dW_{\tau}=\lim_{n\to\infty}\sum_{[t_{i-1},t_{i}]\in\pi_{n}}f(t_{i-1})(W_{t_{i}}-W_{t_{i-1}})$$ where $\pi_{n}$ is a sequence of partitions of $[0,t]$ with mesh going to zero. Then $\int_{0}^{t}f(\tau)dW_{\tau}$ is a sum of normal random variables and hence is normal. So all we need to do is calculate the mean and variance. Firstly: \begin{eqnarray*} E(\lim_{n\to\infty}\sum_{[t_{i-1},t_{i}]\in\pi_{n}}f(t_{i-1})(W_{t_{i}}-W_{t_{i-1}})) & = & \lim_{n\to\infty}\sum_{[t_{i-1},t_{i}]\in\pi_{n}}f(t_{i-1})E(W_{t_{i}}-W_{t_{i-1}})\\ & = & \lim_{n\to\infty}\sum_{[t_{i-1},t_{i}]\in\pi_{n}}f(t_{i-1})\times0\\ & = & 0 \end{eqnarray*} due to independence of Wiener increments. Secondly: \begin{eqnarray*} var(\int_{0}^{t}f(\tau)dW_{\tau}) & = & E((\int_{0}^{t}f(\tau)dW_{\tau})^{2})\\ & = & E(\int_{0}^{t}f(\tau)^{2}d\tau) \end{eqnarray*} by Ito isometry.
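
To spell out the last step (the integrand is deterministic, so the outer expectation does nothing): $$ var\left(\int_{0}^{t}f(\tau)dW_{\tau}\right)=E\left(\int_{0}^{t}f(\tau)^{2}d\tau\right)=\int_{0}^{t}|f(\tau)|^{2}d\tau, $$ which is exactly the claimed variance.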

by fushsialatitude at July 02, 2015 07:02 PM

StackOverflow

ssh to vagrant servers not possible when defining several servers in one vagrant file

Does anyone know if there are any problems regarding ssh access to servers when several servers are defined in the vagrant file ?

Here is the content of my vagrant file:

VAGRANTFILE_API_VERSION = "2"

 Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
 config.ssh.insert_key = false
 config.vm.provider :virtualbox do |vb|
 vb.customize ["modifyvm", :id, "--memory", "256"]
 end


# Application server 1.
config.vm.define "app1" do |app|
app.vm.hostname = "orc-app1.dev"
app.vm.box = "centos7"
app.vm.network :public_network, ip: "192.168.60.4"
config.ssh.forward_agent = true
end

# Application server 2.
config.vm.define "app2" do |app|
config.ssh.forward_agent = true
app.vm.hostname = "orc-app2.dev"
app.vm.box = "centos7"
app.vm.network :private_network, ip: "192.168.60.5"
end

# Database server.
config.vm.define "db" do |db|
config.ssh.forward_agent = true
db.vm.hostname = "orc-db.dev"
db.vm.box = "centos7"
db.vm.network :private_network, ip: "192.168.60.6"
end
end

vagrant ssh app1 works just fine, but if I try to access the server with plain ssh, like ssh vagrant@192.168.60.6, it is not able to connect.

The strange part is that I have no trouble accessing it with "normal" ssh if I define each of the servers in separate Vagrantfiles.

I need normal ssh access because I am trying to test my ansible playbooks before I go "live" on my cloud servers.

Are there any settings in the Vagrantfile I am missing? Why does this work if I have separate Vagrantfiles for each server?

by Hans Jacob Melby at July 02, 2015 07:01 PM

Fefe

Have you heard this one? In London the experts found ...

Have you heard this one?
In London, the experts found a door in the residence whose key could not be located, and which, according to household staff, had not been opened for 14 years. What was behind it "could not be examined".
A theater? A retirement home? No! The residence of German diplomats. Hey, who would ever want to bug foreign diplomats or anything! Pfft! As if they ever did or knew anything of interest to anyone!1!!

Anyone now thinking: well, London, constant rain, English food, that is surely the siding of diplomacy where you park the people who are done anyway, and in other embassies things surely run better! For those people I have bad news:

The BND also criticized that the telephone system was not maintained by a German company, but that local technicians held administrator rights.
That was in Beirut. And, unsurprisingly, their phone calls were then siphoned off.

Other telephone systems had no admin accounts in foreign hands, but were not in better shape either:

Others allowed the do-not-disturb protection to be bypassed, i.e. on a busy signal the system would automatically search for another free extension in the building. In many cases the security officers had also neglected to deactivate the "barge-in" function of the telephone systems.
Lucky that our intelligence services are so on the ball that this kind of thing gets noticed quickly! Concretely, in this case, during panicked spot checks after the Snowden revelations. Before that, decades of dead silence, with the occasional reminder letter asking that the following things please be fixed. But the diplomats are in no hurry about it; they are even too lazy to use the purpose-built bug-proof rooms for their sensitive meetings. Too inconvenient! Better the meeting room with the glass front, with the windows open for ventilation and so on!

And otherwise?

In Paris they found a "hole in the wall" in the meeting room, which they classified as "very worrying". They also discovered unneeded heating-pipe ends. [...] Likewise in Tel Aviv, where ceiling openings had not been sealed, even though that had already been flagged years earlier.
Then you know what you've got! I know, I know! They all just need a raise, then they will do their jobs properly!1!! Don't laugh, our members of parliament argue exactly that at every raise of their allowances and get away with it!

July 02, 2015 07:01 PM

Today Pofalla rolled out the really big smoke machine at the BND inquiry committee ...

Today Pofalla rolled out the really big smoke machine at the BND inquiry committee. Oh well, accord, understanding, treaty, it's all the same thing anyway!1!! And besides, don't ask me, I know nothing. And this shortcrust-pastry character was responsible for coordinating the intelligence services (!) in this country! A lot of text, but for a money quote: Ctrl-F 19

Unbelievable.

July 02, 2015 07:01 PM

StackOverflow

How to setup a method that takes an implicit parameter in scala using Mockito

I am getting the familiar "Invalid use of argument matchers" error when trying to set up a method that takes a regular parameter and an implicit parameter.

trait MyLogger {
    def debug(msg: => Any)(implicit context: LoggingContext)
}

implicit val lc = EmptyLoggingContext
val myMock = mock[MyLogger]

when(myMock.debug(any())(any())).thenAnswer(
    new Answer[Unit]() {
        override def answer(invocation: InvocationOnMock): Unit = ???
    }
)

With the above, I get "2 matchers expected, 1 recorded"

If I change it to:

when(myMock.debug(any())).thenAnswer....

I don't get a matcher error, but the Answer override is never invoked.

I have also tried:

when(myMock.debug(any())(any(classOf[LoggingContext])))

which again gives the 2 matchers expected error.

Any suggestions much appreciated.

Thanks.

UPDATE: The issue here turns out to be that msg is a by-name parameter which is not mockable in mockito
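
Given that, the workaround I am leaning towards is a hand-rolled stub instead of a Mockito mock (a sketch; the class name is mine):

    // no Mockito involved, so the by-name parameter is not a problem
    class RecordingLogger extends MyLogger {
      var messages: List[Any] = Nil
      override def debug(msg: => Any)(implicit context: LoggingContext): Unit = {
        messages = msg :: messages   // force the by-name argument and record it
      }
    }

    val logger = new RecordingLogger
    // exercise the code under test with `logger`, then assert on logger.messages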

by Sydney at July 02, 2015 06:54 PM

QuantOverflow

What are the different algorithmic trading strategies available for an individual trader [on hold]

I am new to algorithmic trading (automatic or semi-automatic) and I am really curious about how it works. To understand more about algo trading I am reading the book An Introduction to Algorithmic Trading: Basic to Advanced Strategies. I am not from a financial or mathematical background but have a little understanding of programming. In that book they talk briefly about the different algos such as VWAP, POV and TWAP, which are used in tier-I firms. Can these algos also be used for individual trading? What kind of algos do you recommend to a new algo trader? Where can I find tutorials regarding those strategies?

by Eka at July 02, 2015 06:50 PM

StackOverflow

How do I share ansible variables across hosts using vars_prompt

In ansible, I do some code on a remote machine, and then I want to use the result from the vars_prompt again on a different host. I've searched around, and the docs make it sound like i should use {{ hostvars.local_server_name_in_vagrant_group.local_username }}, using my example below to set the context. However, it says that the index from the dictionary doesn't exist when referencing hostvars. Instead, as shown below, I simply do a vars_prompt twice. Gross! Any tips?

BTW, there's also discussion on whether or not using vars_prompt is a great idea. I have confirmed that for my usage, indeed, I do want to use vars_prompt. Thanks!


- hosts: vagrant
  vars_prompt:
    local_username: "enter your desired local username"

... remote task activity using local_username...

- hosts: localhost
  connection: local

  vars_prompt:
    local_username: "enter your desired local username, again (please)  

... host task activity, also using local_username ...

by cdaringe at July 02, 2015 06:44 PM

Shapeless HList type checking

I am using Shapeless and have the following method to compute the difference between two HLists:

  def diff[H <: HList](lst1: H, lst2:H):List[String] = (lst1, lst2) match {
    case (HNil, HNil)                 => List()
    case (h1::t1, h2::t2) if h1 != h2 => s"$h1 -> $h2" :: diff(t1, t2)
    case (h1::t1, h2::t2)             => diff(t1, t2)
    case _                            => throw new RuntimeException("something went very wrong")
  }

Since both parameters to the method take an H, I would expect HLists of different types to not compile here. For example:

diff("a" :: HNil, 1 :: 2 :: HNil)

Shouldn't compile but it does, and it produces a runtime error: java.lang.RuntimeException: something went very wrong. Is there something I can do to the type parameters to make this method only accept two sides with identical types?
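
One lightweight constraint I have sketched (leaving diff itself unchanged): a wrapper that takes two type parameters and demands evidence that they are equal, so mismatched HLists are rejected at compile time:

    import shapeless._

    // only compiles when both arguments have exactly the same HList type;
    // delegates to the unconstrained diff above for the actual work
    def diffSameType[A <: HList, B <: HList](lst1: A, lst2: B)(implicit ev: A =:= B): List[String] =
      diff(lst1, lst2)

    // diffSameType("a" :: HNil, 1 :: 2 :: HNil)   // compile error: cannot prove String :: HNil =:= Int :: Int :: HNil
    // diffSameType("a" :: HNil, "b" :: HNil)      // fine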

by triggerNZ at July 02, 2015 06:28 PM

QuantOverflow

Fractals indicator (Bill Williams) R Quantstrat

Hi, has anyone seen or does anyone know how to create an indicator for fractals in quantstrat?

fractals explained http://forex-indicators.net/bill-williams/fractals

example code (only interested in type 1 fractal) http://forexsb.com/forum/topic/68/fractals/

by user1736644 at July 02, 2015 06:19 PM

StackOverflow

Ansible - Variable precedence fails when hosts are the same

I want to make the most of variable precedence with ansible.

So let’s have a look at this simplified project:

├── group_vars
│   └── all
│   └── web
├── hosts
│   └── local
└── site.yml

The inventory file hosts/local:

[local_web]
192.168.1.20

[local_db]
192.168.1.20

[web:children]
local_web

[db:children]
local_db

The group_vars/all file:

test: ALL

The group_vars/web file:

test: WEB

The site.yml file:

---
- name: Test
  hosts: db
  tasks:
    - debug: var=test

Alright, so this is just to test variable precedence. As I run ansible on the db group, the test variable should display “ALL”, as ansible should only look into group_vars/all, right? Wrong:

TASK: [debug var=test] ******************************************************** 
ok: [192.168.1.20] => {
    "var": {
        "test": "WEB"
    }
}

Actually, if local_web and local_db hosts are different, then it works.

Why does ansible still look into an unrelated config file when the hosts are the same? Is that a bug or just me?

by Gui-Don at July 02, 2015 06:17 PM

min/max of collections containing NaN (handling incomparability in ordering)

I just ran into a nasty bug as a result of the following behavior:

scala> List(1.0, 2.0, 3.0, Double.NaN).min
res1: Double = NaN

scala> List(1.0, 2.0, 3.0, Double.NaN).max
res2: Double = NaN

I understand that for a pairwise comparison it may sometimes be preferable to have max(NaN, 0) = NaN and this is probably the reason why java.lang.Double.compare follows this convention (there seems to be an IEEE standard for that). For a collection however, I really think that this is a strange convention. After all the above collection does contain valid numbers; and these numbers have a clear maximum and minimum. In my opinion, the conception that the maximum number of the collection is not a number is a contradiction since, well, NaN is not a number, so it cannot be the maximum or minimum "number" of a collection -- unless there are no valid numbers at all; in this case it makes perfect sense that the maximum "is not a number". Semantically the min and max functions degenerate to a check whether the collection contains a NaN. Since there are more appropriate ways to check for the existence of NaNs (e.g. collection.find(_.isNaN)) it would be great to maintain a semantically meaningful min/max on collections.

So my question is: What is the best approach to obtain the behavior to ignore the existence of NaNs? I see two possibilities:

  1. Filtering NaNs before calling min/max. Since this requires explicit handling of the issue in all places and may incur performance penalties I would prefer something easier.

  2. It would be great to have a kind of NaN-ignoring ordering which can be used as an implicit ordering wherever necessary. I tried the following:

      object NanAwareOrdering extends Ordering[Double] {
        def compare(x: Double, y: Double) = {
          if (x.isNaN()) {
            +1 // without checking x, return y < x
          } else if (y.isNaN()) {
            -1 // without checking y, return x < y
          } else {
            java.lang.Double.compare(x, y)
          }
        }
      }
    

    However, this approach seems to depend on whether I'm interested in finding the min or max, i.e.:

     scala> List(1.0, 2.0, 3.0, Double.NaN).min(NanAwareOrdering)
     res7: Double = 1.0
    
     scala> List(1.0, 2.0, 3.0, Double.NaN).max(NanAwareOrdering)
     res8: Double = NaN
    

    This means that I would have to have two NanAwareOrderings depending on whether I want the minimum or the maximum, which would prevent having a single implicit val. Therefore my question is: how can I define an ordering that handles both cases at once?

Update:

For the sake of completeness: In the course of analyzing the issue I realized that the premise "degenerates to a check for NaN" is actually wrong. In fact, I think it is even more ugly:

scala> List(1.0, Double.NaN).min
res1: Double = NaN

scala> List(Double.NaN, 1.0).min
res2: Double = 1.0
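
For comparison, here is option 1 spelled out as a baseline (an explicit sketch, nothing clever):

    // explicit helpers that drop NaNs once; NaN is returned only if nothing else is left
    def minIgnoringNaN(xs: Seq[Double]): Double = {
      val clean = xs.filterNot(_.isNaN)
      if (clean.isEmpty) Double.NaN else clean.min
    }

    def maxIgnoringNaN(xs: Seq[Double]): Double = {
      val clean = xs.filterNot(_.isNaN)
      if (clean.isEmpty) Double.NaN else clean.max
    }

    minIgnoringNaN(List(1.0, 2.0, 3.0, Double.NaN))  // 1.0
    maxIgnoringNaN(List(1.0, 2.0, 3.0, Double.NaN))  // 3.0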

by bluenote10 at July 02, 2015 06:16 PM

TheoryOverflow

Information-theoretic Diffie-Hellman

The following non-standard description of Diffie-Hellman is entirely my own, by which I mean that I came up with it having not read about it anywhere else beforehand.

In Diffie-Hellman Alice and Bob choose numbers $x$ and $y$ in a fine representation and publish $x$ and $y$ in a coarser form from which they can both determine $xy$ in coarse form. A form is considered coarse if the product of two numbers in the coarse form is (practically) uncomputable, but the product of a number in the coarse form and another number in the fine form is computable.

So is there an information-theoretic analogue? My thoughts are that a number $x \in [0,1] \subset \Bbb R$ can be represented:

  • In a fine way using an upper-bound and lower-bound oracle.
  • In one coarse form by using an upper-bound oracle.
  • In another coarse form by using a lower-bound oracle.

Is there any literature on this?

Cheers

by NaN at July 02, 2015 06:16 PM

StackOverflow

Ansible to automate a couple of easy tasks

I am new to Ansible and just started learning it. I have a couple of steps that I do manually on a weekly basis and was tasked to use Ansible to automate this, wasn't sure if this is something that Ansible can do. This is my current environment:

1) I have an artifactory located on a URL(https://Artifactory.company.com/index.html)

2) I have a YAML file under Git, in that file I have so many tarballs

3) I will need to go to the artifactory URL, inside the search box, type the name of the tarballs one by one and click the download button

4) The tarballs would get downloaded to my personal machine under /home/Download

5) I then will need to scp all these files to a remote server: scp test.tar.gz user@192.195.151.12:/tmp

6) I then need to SSH to that remote host and move the files from the tmp directory to /srv/test directory

Can Ansible automate all these steps, or can it only automate the last two? Where do I start? Any help and pointers would be extremely helpful; I have to produce something by Monday.

Thanks

by Irina at July 02, 2015 06:06 PM

/r/compsci

Current state of genetic algorithm/programming research?

I recall that in the late 90's the search technique fell out of favor as being too hand-tuned and lacking theory. Curious if there's any current respectable research going on etc.

submitted by maxpowersb99

July 02, 2015 05:54 PM

StackOverflow

An 'override' in front of the method while implementing an abstract class? [duplicate]

This question already has an answer here:

In Scala:

Does one need to place the override keyword when implementing an abstract method of an abstract class?

by Alex Khvatov at July 02, 2015 05:53 PM

Getting the string representation of a type at runtime in Scala

In Scala, is it possible to get the string representation of a type at runtime? I am trying to do something along these lines:

def printTheNameOfThisType[T]() = {
  println(T.toString)
}
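
For reference, a minimal sketch of what I have been experimenting with, using a TypeTag from scala-reflect (assuming that dependency is acceptable):

    import scala.reflect.runtime.universe._

    def printTheNameOfThisType[T: TypeTag](): Unit =
      println(typeOf[T].toString)

    printTheNameOfThisType[List[Int]]()   // prints "List[Int]"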

by Kevin Albrecht at July 02, 2015 05:47 PM

CompsciOverflow

Formula for sufficiently lengthy encryption key?

As you add length to an encryption key, at some point the message becomes impossible to brute-force decrypt. This is because at that point, if you go through all the possible keys, you'll get many meaningful decryptions just by random chance and you won't be able to determine which was the original message.

As you add length to the message though, these meaningful decryptions become rarer until there is once again a small enough number of them left to figure out which is the right one (if you know what you're looking for, that is).

Has anybody figured out a way to estimate the required key length for this obfuscation by quantity to happen for more popular encryption algorithms?

by user35159 at July 02, 2015 05:41 PM

QuantOverflow

Boundary Condition for Convertible Bond under Two-factor Model Interest Rate

I want to find boundary conditions for a convertible bond under a two-factor interest rate model. The portfolio contains a stock whose price follows the stochastic differential equation \begin{align} dS_t=rS_t\,dt+\sigma S_t\,dW_1(t) \end{align} where $\sigma$ is constant, and the dynamics of $r$ are \begin{align} dr_t=\kappa(\theta-r_t)dt+\Sigma\, dW_2(t) \end{align} I am confused; please guide me.

by Joe fritz at July 02, 2015 05:35 PM

Proving $\mathbb{E}(g(X)) = \int_{\mathbb{R}} g(x) f(x) dx$

Let $X$ be a random variable on a probability space $(\Omega, \mathcal{F}, P)$ and let $g$ be a Borel-measurable function on $\mathbb{R}$. In Shreve II (p 28) he proves, using the standard machine, that $$ \mathbb{E}(g(X)) = \int_{\mathbb{R}} g(x)\, d(P \circ X^{-1})(x), $$ where $P \circ X^{-1}$ is the pushforward measure on $\mathbb{R}$. He then again uses the standard machine to prove that, for a continuous random variable $X$, that $$ \mathbb{E}(g(X)) = \int_{\mathbb{R}} g(x) f(x) d\lambda(x), $$ where $f$ is the probability density function of $X$ and $\lambda$ is the Lebesgue measure on $\mathbb{R}$.

My question is, is the standard machine really necessary for this second part? By definition of a continuous random variable, $f = \frac{d(P \circ X^{-1})}{d \lambda}$, and so $$ \int_{\mathbb{R}} g(x)\, d(P \circ X^{-1})(x) = \int_{\mathbb{R}} g(x)f(x) d\lambda(x) $$ since $f$ is the Radon-Nikodym derivative of $P \circ X^{-1}$ w.r.t. $\lambda$.

Perhaps I am overlooking some integrability conditions?

by bcf at July 02, 2015 05:28 PM

TheoryOverflow

Assignment problem with multiple workers for each job

I am wondering if there are any results on the following version of the assignment problem. We are given a set of jobs $J$ and a set of workers $W$, and for each job $j$ and worker $w$ we are given the expertise of the worker for the job $\omega(w,j)$. The goal is to select a subset of jobs $S\subseteq J$ and assign exactly two workers to each job $j \in S$ while maximizing the total expertise.

That is, if $x_{w, j} = 1$ indicates that the worker $w$ has been assigned to job $j$, solve the following program:

$$ \text{maximize} \sum_{j \in S} \sum_{w \in W} x_{w, j} \cdot \omega(w, j) $$ s.t. $$ \sum_{j\in S} x_{w, j} \leq 1, \quad \forall w\in W $$ $$ \sum_{w\in W} x_{w, j} = 2, \quad \forall j \in S $$ $$ x_{w, j} \in \{0, 1\} $$

Edit:

It seems like my explanation is not very clear so I am adding a simple example. Consider the following graph where dotted edges have weight 0 and solid edges have weight 1. Worker $i$ has expertise $1$ in job $i$ and expertise $0$ in the other job.

The two possible solutions both have value 1: either assign both workers to job 1, or assign both workers to job 2. Observe that assigning worker 1 to job 1 and worker 2 to job 2 would result in a solution of value 2, but it is not a valid solution since a job must be assigned exactly 2 workers.

(Figure: two jobs and two workers; worker $i$ is joined to job $i$ by a solid edge of weight 1 and to the other job by a dotted edge of weight 0.)

by George Octavian Rabanca at July 02, 2015 05:23 PM

StackOverflow

Scala:case class runTime Error

This demo ran OK. But when I move it into a function of another class (my former project) and call the function, it fails to compile.

     object DFMain {
         case class Person(name: String, age: Double, t:String)
         def main (args: Array[String]): Unit = {
         val sc = new SparkContext("local", "Scala Word Count")
         val sqlContext = new org.apache.spark.sql.SQLContext(sc)
         import sqlContext.implicits._
         val bsonRDD = sc.parallelize(("foo",1,"female")::
                                        ("bar",2,"male")::
                                     ("baz",-1,"female")::Nil)
                      .map(tuple=>{
                    var bson = new BasicBSONObject()
                    bson.put("name","bfoo")
                    bson.put("value",0.1)
                    bson.put("t","female")
                    (null,bson)
                 })
    val tDf = bsonRDD.map(_._2)
              .map(f=>Person(f.get("name").toString,
                   f.get("value").toString.toDouble,
                   f.get("t").toString)).toDF()

       tDf.limit(1).show()
 }
}

'MySQLDao.insertIntoMySQL()' compile error

object MySQLDao {
     private val sc= new SparkContext("local", "Scala Word Count")
     val sqlContext = new org.apache.spark.sql.SQLContext(sc)
     import sqlContext.implicits._

     case class Person(name: String, age: Double, t:String)
     def insertIntoMySQL(): Unit ={

      val bsonRDD = sc.parallelize(("foo",1,"female")::
                                     ("bar",2,"male")::
                                     ("baz",-1,"female")::Nil)
                       .map(tuple=>{
               val bson = new BasicBSONObject()
               bson.put("name","bfoo")
               bson.put("value",0.1)
               bson.put("t","female")
               (null,bson)
         })
 val tDf = bsonRDD.map(_._2).map( f=> Person(f.get("name").toString,
                                           f.get("value").toString.toDouble,
                                            f.get("t").toString)).toDF()

   tDf.limit(1).show()

 }
} 

Well, when I call 'MySQLDao.insertIntoMySQL()' I get the error

value typedProductIterator is not a member of object scala.runtime.ScalaRunTime

case class Person(name: String, age: Double, t:String)

by yeyimilk at July 02, 2015 05:10 PM

Preprocessing Twitter data - Spark -scala

I am working on a Spark Streaming application with Twitter data. Out of the incoming Twitter data, I need to filter out stop words and other non-alphanumeric characters. My code is like this -

    val sparkConf = new SparkConf().setAppName("PREPROCESSING")
    val sc = new SparkContext(sparkConf)
    // collect stopwords
    val stopwordList = sc.textFile("file://path to -stopwords.txt").collect()
    System.setProperty("twitter4j.oauth.consumerKey", "")
    System.setProperty("twitter4j.oauth.consumerSecret", "")
    System.setProperty("twitter4j.oauth.accessToken", "")
    System.setProperty("twitter4j.oauth.accessTokenSecret","")
    val ssc = new StreamingContext(sc, Seconds(2))
    val stream = TwitterUtils.createStream(ssc, None)
    //Filter out non-English Tweets
    val lanFilter = stream.filter(status => status.getUser.getLang == "en")
    // Filter out nonalpha-numeric chars and tokenize the words
    val RDD1 = lanFilter.flatMap(status => status.getText.replaceAll("[^a-zA-Z0-9\\s]", " ").toLowerCase().split(" "))
    // Filter out words starting with #,@ and Null strings
    val filterRDD = RDD1.filter(word => !word.startsWith("#")).filter(word => !word.startsWith("@")).filter(word => !word.contains(" "))//.filter(word => word != " ")
    // filter out stopwords
    val filterStopwordRDD = filterRDD.filter(word => stopwordList.forall(stopword => !word.contains(stopword)))
    // Print preprocessed tokens
    filterStopwordRDD.print()

If I run the above job, I am supposed to get preprocessed tokens with the specified filters. But I am getting some numeric values and many null tokens. Can you tell me where I am going wrong? Thank you.

The output is like -

-------------------------------------------
Time: 1435856092000 ms
-------------------------------------------








30

...
-------------------------------------------
Time: 1435856094000 ms
-------------------------------------------






5



...
-------------------------------------------
Time: 1435856096000 ms
-------------------------------------------

(the later batches likewise contain only empty tokens)
...
and output is not giving any word tokens.
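
A small self-contained check of what I suspect is happening with the empty tokens (my assumption, using plain collections instead of a DStream):

    // split(" ") keeps the empty strings produced by consecutive spaces,
    // and none of the later startsWith/contains filters remove them
    val raw = "hello   world 30".replaceAll("[^a-zA-Z0-9\\s]", " ").toLowerCase.split(" ").toList
    // raw == List("hello", "", "", "world", "30")
    val tokens = raw.filter(_.nonEmpty)
    // tokens == List("hello", "world", "30")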

by SNR at July 02, 2015 05:01 PM

StackOverflow

How to create a VertexId in Apache Spark GraphX using a Long data type?

I'm trying to create a Graph using some Google Web Graph data which can be found here:

https://snap.stanford.edu/data/web-Google.html

import org.apache.spark._
import org.apache.spark.graphx._
import org.apache.spark.rdd.RDD



val textFile = sc.textFile("hdfs://n018-data.hursley.ibm.com/user/romeo/web-Google.txt")
val arrayForm = textFile.filter(_.charAt(0)!='#').map(_.split("\\s+")).cache()
val nodes = arrayForm.flatMap(array => array).distinct().map(_.toLong)
val edges = arrayForm.map(line => Edge(line(0).toLong,line(1).toLong))

val graph = Graph(nodes,edges)

Unfortunately, I get this error:

<console>:27: error: type mismatch;
 found   : org.apache.spark.rdd.RDD[Long]
 required: org.apache.spark.rdd.RDD[(org.apache.spark.graphx.VertexId, ?)]
Error occurred in an application involving default arguments.
       val graph = Graph(nodes,edges)

So how can I create a VertexId object? To my understanding it should be sufficient to pass a Long.
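
For reference, a sketch of what I believe Graph() expects: the vertices as an RDD of (VertexId, attribute) pairs rather than a bare RDD[Long] (VertexId is just a type alias for Long), e.g.:

    // pair each vertex id with some attribute, here simply the original string
    val nodes: RDD[(VertexId, String)] =
      arrayForm.flatMap(array => array).distinct().map(id => (id.toLong, id))

    val graph = Graph(nodes, edges)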

Any ideas?

Thanks a lot!

romeo

by Romeo Kienzler at July 02, 2015 04:58 PM

StackOverflow

How can I find a Formatter type class for reactivemongo.bson.BSONObjectID?

I'm working on a web application using Play Framework (2.2.6) / scala / mongoDB, and I have a problem with reactivemongo.bson.BSONObjectID. (I'm a beginner in both ReactiveMongo and Scala)

My controller contains this code :

val actForm = Form(tuple(
    "name" -> optional(of[String]),
    "shortcode" -> optional(of[String]),
    "ccam" -> mapping(
        "code" -> optional(of[String]),
        "description" -> optional(of[String]),
        "_id" -> optional(of[BSONObjectID])
    )(CCAMAct.apply)(CCAMAct.unapply)
));

def addAct = AsyncStack(AuthorityKey -> Secretary) { implicit request =>
    val user = loggedIn
    actForm.bindFromRequest.fold(
    errors => Future.successful(BadRequest(errors.errorsAsJson)), {
      case (name, shortcode, ccam) =>

        val newact = Json.obj(
          "id" -> BSONObjectID.generate,
          "name" -> name,
          "shortcode" -> shortcode,
          "ccam" -> ccam
        )

        settings.update(
            Json.obj("practiceId" -> user.practiceId.get),
            Json.obj("$addToSet" -> Json.obj("acts" -> Json.obj("acte" -> newact)))
        ).map { lastError => Ok(Json.toJson(newact)) }
    })
}

The CCAMAct class is defined like this :

import models.db.Indexable
import play.api.libs.json._
import reactivemongo.bson.BSONObjectID
import reactivemongo.api.indexes.{Index, IndexType}
import models.db.{MongoModel, Indexable}
import scala.concurrent.Future
import play.modules.reactivemongo.json.BSONFormats._
import models.practice.Practice
import play.api.libs.functional.syntax._

case class CCAMAct(code:Option[String],
                   description:Option[String],
                   _id: Option[BSONObjectID] = None) extends MongoModel {}

object CCAMAct extends Indexable {

    private val logger = play.api.Logger(classOf[CommonSetting]).logger

    import play.api.Play.current
    import play.modules.reactivemongo.ReactiveMongoPlugin._
    import play.modules.reactivemongo.json.collection.JSONCollection
    import scala.concurrent.ExecutionContext.Implicits.global

    def ccam: JSONCollection = db.collection("ccam")

    implicit val ccamFormat = Json.format[CCAMAct]

    def index() = Future.sequence(
        List (
            Index(Seq("description" -> IndexType.Text))
        ).map(ccam.indexesManager.ensure)
    ).onComplete { indexes => logger.info("Text index on CCAM ends") }
}

Thus the compiler throws me this error :

Cannot find Formatter type class for reactivemongo.bson.BSONObjectID. Perhaps you will need to import play.api.data.format.Formats._
       "_id" -> optional(of[BSONObjectID])
                           ^

(Of course I have already imported "play.api.data.format.Formats._")

I also tried to add a custom Formatter following answers from similar posts on the web.

object Format extends Format[BSONObjectID] {

    def writes(objectId: BSONObjectID): JsValue = JsString(objectId.stringify)

    def reads(json: JsValue): JsResult[BSONObjectID] = json match {
        case JsString(x) => {
            val maybeOID: Try[BSONObjectID] = BSONObjectID.parse(x)
            if(maybeOID.isSuccess) 
                JsSuccess(maybeOID.get) 
            else {
                JsError("Expected BSONObjectID as JsString")
            }
        }
        case _ => JsError("Expected BSONObjectID as JsString")
    }
}

...without any success.

Does anyone know a clean solution to this problem?
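
For what it's worth, the direction I am experimenting with now is a play.api.data.format.Formatter for BSONObjectID (as opposed to the JSON Format above); this is only a sketch and the error keys are my own:

    import play.api.data.FormError
    import play.api.data.format.Formatter
    import reactivemongo.bson.BSONObjectID

    implicit object BSONObjectIDFormatter extends Formatter[BSONObjectID] {
      def bind(key: String, data: Map[String, String]): Either[Seq[FormError], BSONObjectID] =
        data.get(key) match {
          case Some(value) => BSONObjectID.parse(value).toOption
            .toRight(Seq(FormError(key, "error.objectid", Nil)))
          case None => Left(Seq(FormError(key, "error.required", Nil)))
        }

      def unbind(key: String, value: BSONObjectID): Map[String, String] =
        Map(key -> value.stringify)
    }

The idea would be to put this somewhere the controller can see it (e.g. a companion object or an import) so that of[BSONObjectID] picks it up.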

by Xan at July 02, 2015 04:52 PM

Scala. Get field names list from case class

I need to get only the field names of a case class; I'm not interested in its values. I thought it would not be a problem and that getClass.getDeclaredFields.map(_.getName) would return a list of field names. And I can't find a similar question.

scala> case class User(id: Int, name: String)
defined class User

scala> User.getClass.getDeclaredFields
res14: Array[java.lang.reflect.Field] = Array(public static final User$ User$.MODULE$)

scala> User.getClass.getDeclaredFields.toList
res15: List[java.lang.reflect.Field] = List(public static final User$ User$.MODULE$)

scala> val user = User(1, "dude")
user: User = User(1,dude)

scala> user.getClass.getDeclaredFields.toList
res16: List[java.lang.reflect.Field] = List(private final int User.id, private final java.lang.String User.name)

What is this User$.MODULE$? What's that?

Method getDeclaredFields works fine when you have an instance of the case class, but I don't want to create an instance just to get the field names.

Why isn't this true? User.getClass.getDeclaredFields.map(_.getName) == List("id", "name")
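
For comparison, a sketch of the reflection-based route I am considering (scala.reflect.runtime, no instance required; I have not verified in what order the members come back):

    import scala.reflect.runtime.universe._

    def fieldNames[T: TypeTag]: List[String] =
      typeOf[T].members.collect {
        case m: MethodSymbol if m.isCaseAccessor => m.name.toString
      }.toList

    case class User(id: Int, name: String)
    fieldNames[User]   // the case accessor names, e.g. List("name", "id")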

by Alexander Kondaurov at July 02, 2015 04:47 PM

CompsciOverflow

What happens when two processes hold an election using bully algorithm simultaneously

A coordinator is used to sychronise the usage of resources in a distributed system.

The Bully Algorithm is a method to elect a new coordinator in a distributed system when the current coordinator disappears for some reason.

I am just wondering what happens when two processes realise the demise of the coordinator simultaneously and both decide to hold an election using the Bully Algorithm?

by Computernerd at July 02, 2015 04:45 PM

TheoryOverflow

How expensive may it be to destroy all long s-t paths in a DAG?

We consider DAGs (directed acyclic graphs) with one source node $s$ and one target node $t$; parallel edges joining the same pair of vertices are allowed. A $k$-cut is a set of edges whose removal destroys all $s$-$t$ paths longer than $k$; shorter $s$-$t$ paths as well as long "inner" paths (those not between $s$ and $t$) may survive!

Question: Is it enough to remove at most about a $1/k$ portion of edges from a DAG in order to destroy all $s$-$t$ paths longer than $k$?

That is, if $e(G)$ denotes the total number of edges in $G$, does then every DAG $G$ have a $k$-cut with at most about $e(G)/k$ edges? Two examples:

  1. If all $s$-$t$ paths have length $> k$, then a $k$-cut with $\leq e(G)/k$ edges exists. This holds because then there must be $k$ disjoint $k$-cuts: just layer the nodes of $G$ according to their distance from the source node $s$.
  2. If $G=T_n$ is a transitive tournament (a complete DAG), then also a $k$-cut with $\leq k\binom{n/k}{2} \approx e(G)/k$ edges exists: fix a topological ordering of nodes, split the nodes into $k$ consecutive intervals of length $n/k$, and remove all edges joining the nodes of the same interval; this will destroy all $s$-$t$ paths longer than $k$.

Remark 1: A naive attempt to give a positive answer (which I also tried as first) would be to try to show that every DAG must have about $k$ disjoint $k$-cuts. Unfortunately, Example 2 shows that this attempt can badly fail: via a nice argument, David Eppstein has shown that, for $k$ about $\sqrt{n}$, the graph $T_n$ cannot have more than four disjoint $k$-cuts!

Remark 2: It is important that a $k$-cut needs only to destroy all long $s$-$t$ paths, and not necessarily all long paths. Namely, there exist1 DAGs in which every "pure" $k$-cut (avoiding edges incident to $s$ or $t$) must contain almost all edges. So, my question actually is: can the possibility to remove also edges incident with $s$ or $t$ substantially reduce the size of a $k$-cut? Most probably, the answer is negative, but I could not find a counterexample as yet.

Motivation: My question is motivated by proving lower bounds for monotone switching-and-rectifier networks. Such a network is just a DAG, some of whose edges are labeled by tests "is $x_i=1$?" (there are no tests $x_i=0$). The size of a network is the number of labeled edges. An input vector is accepted, if there is an $s$-$t$ path all whose tests are consistent with this vector. Markov has proved that, if a monotone boolean function $f$ has no minterms shorter than $l$ and no maxterms shorter than $w$, then size $l\cdot w$ is necessary. A positive answer to my question would imply that networks of size about $k\cdot w_k$ are necessary, if at least $w_k$ variables must be set to $0$ in order to destroy all minterms longer than $k$.


1The construction is given in this paper. Take a complete binary tree $T$ of depth $\log n$. Remove all edges. For every inner node $v$, draw an edge to $v$ from every leaf of the left subtree of $T_v$, and an edge from $v$ to every leaf of the right subtree of $T_v$. Thus, every two leaves of $T$ are connected by a path of length $2$ in the DAG. The DAG itself has $\sim n$ nodes and $\sim n\log n$ edges, but $\Omega(n\log n)$ edges must be removed in order to destroy all paths longer than $\sqrt{n}$.

by Stasys at July 02, 2015 04:45 PM

QuantOverflow

Why do people always seek finite-variance models for option pricing

For the purpose of getting fatter tails than the Gaussian, I have seen people for example use $\alpha$-stable processes to model the stock. But in that case they end up using 'tempered' versions of the processes, where the tails decay exponentially so as to make the second moment finite. So the standard seems to be that the second moment must be finite. But why is this so? Is it just for tractability of the model or do they believe that finite second moment is an empirical fact?

Additional discussion:

In this paper Taleb explores the possibility of constructing a risk-neutral measure in an infinite-variance setting. As he mentions, this destroys the dynamic-hedging theory, but in practice it does not make a difference.

by Slungpue at July 02, 2015 04:43 PM

question on mvo

I am trying to replicate this mvo objective function using quadprog

mu'*(w-b) - lambda*(w-b)'*S*(w-b), where sum(w-b) = 0 and -b <= w-b <= 1-b

Now if I do the following, where I substitute

w_tilda for w-b, so I can solve for w_tilda and put it in the form that quadprog expects, and I call

quadprog(ra*2*S, -mu, [], [], ones(size(mu)), 0, -b, 1-b,[]), I get some solution

This gives me something different from if i expand the expression

w_tilda = w-b

w = w_tilda + b

mu'*(w_tilda + b) - lambda*(w_tilda+b)'*S*(w_tilda+b), which expands (dropping the terms that do not involve w_tilda) into

mu'*w_tilda - 2*lambda*w_tilda'*S*b - lambda*w_tilda'*S*w_tilda

So we solve for

quadprog(2*lambda*S, -mu + 2*lambda*w_tilda'Sb, ones(size(mu)), 0, -b, 1-b,[])

Why does this give me a different optimized result than the first expression? Not sure what I am doing wrong.

by qfd at July 02, 2015 04:40 PM

CompsciOverflow

Why decision problem definition ignores Gödel incompleteness theorem?

The following question assumes that the definition of a decision problem (which is syntactic) was written (and could be changed if it falls short) to capture a concept (a meaning, a semantics) which has both nice implications and some models. So please don't answer "the definition is the definition".

Fix any usual proof system, let $x$ be an object and let $P_{x}$ be a property. By Gödel's incompleteness theorem, there are 3 cases:

  1. $P_x$ is true and there is a proof for this.
  2. $P_x$ is false and there is a proof for this.
  3. $P_x$ is unprovable and there is neither a proof of trueness nor a proof of falseness.

But, regardless of those 3 cases, a decision problem is defined by its set of positive instances (which does not even have to match case 1). Consequently, we must give a truth value (positiveness/negativeness) to statements which don't have one in our proof system. I know that such a completion isn't inconsistent, but it doesn't make much sense in my opinion.

Defining a decision problem constructively (the decision problem would be just the target property) would make more sense. What could be the problems with such an approach?

by François Godi at July 02, 2015 04:39 PM

StackOverflow

Type mismatch, found Unit, required Int. Using pattern matching, Scala with Java libraries

I'm trying to register some udfs (user defined functions) for spark sql and when I try to compile I get a type mismatch error for the following code:

csc.udf.register("DATEADD", (datePart: String, number: Int, date: Timestamp) => {
    val time: Calendar = Calendar.getInstance
    datePart match {
        case "Y" => time.add(Calendar.YEAR, number)
        case "M" => time.add(Calendar.MONTH, number)
        case "D" => time.add(Calendar.DATE, number)
        case "h" => time.add(Calendar.HOUR, number)
        case "m" => time.add(Calendar.MINUTE, number)
        case "s" => time.add(Calendar.SECOND, number)
        case _ => 0
    }
}: Int)

Here is the error I'm getting:

[error] /vagrant/SQLJob/src/main/scala/sqljob/context/CassandraSQLContextFactory.scala:111: type mismatch;
[error]  found   : Unit
[error]  required: Int
[error]                                 case "Y" => time.add(Calendar.YEAR: Int, number)
[error]                                                     ^
[error] one error found
[error] (compile:compileIncremental) Compilation failed
[error] Total time: 7 s, completed Jul 2, 2015 4:19:29 PM

The Calendar class is from java.util.Calendar

Timestamp is from java.sql.Timestamp
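
For reference, a sketch of the direction I am thinking of (my assumption about the intent: Calendar.add returns Unit, so every branch of the match is Unit; doing the add first and returning an Int afterwards should type-check):

    csc.udf.register("DATEADD", (datePart: String, number: Int, date: Timestamp) => {
        val time: Calendar = Calendar.getInstance
        time.setTimeInMillis(date.getTime)   // start from the passed-in date
        datePart match {
            case "Y" => time.add(Calendar.YEAR, number)
            case "M" => time.add(Calendar.MONTH, number)
            case "D" => time.add(Calendar.DATE, number)
            case "h" => time.add(Calendar.HOUR, number)
            case "m" => time.add(Calendar.MINUTE, number)
            case "s" => time.add(Calendar.SECOND, number)
            case _   => // unknown date part: leave the calendar unchanged
        }
        (time.getTimeInMillis / 1000).toInt   // return an Int derived from the result
    }: Int)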

by vicg at July 02, 2015 04:37 PM

Unable to access java fields in clojure

I'm having an issue where I can see the fields of a Java class/object, but I can't actually access them. I can see the fields in two ways. Using this following code.

=>(require '[clojure.reflect :as r])    
=>(use '[clojure.pprint :only [print-table]])    
=>(print-table (sort-by :name (:members (r/reflect  myClass))))

And also, by creating an instance of the object. Let's say the fields are an int denoted a, a string denoted word , and a String ArrayList denoted mylist.

=>myObj
#<myClass 1 hello [world]>

In both cases, I can see that these fields exist. However, when I run the following code, I get the following error.

=>(. myObj mylist)
IllegalArgumentException No matching field found: mylist for class myClass clojure.lang.Reflector.getInstanceField (Reflector.java:271)

Does anyone have any idea what is going on?


In response to Nicolas Modrzyk's answer, I run (.-myField myObject) and get IllegalArgumentException No matching field found: myField for class myClass clojure.lang.Reflector.invokeNoArgInstanceMember (Reflector.java:308)

Additionally, these fields are not private. I have the Java source code in front of me.

by Jordan at July 02, 2015 04:27 PM

/r/emacs

Keeping it all in one directory

I would like to keep all my files, init etc. in the emacs directory itself. I have limited rights on some of the Windows boxes I am on and it is just easier for me to do it that way since I can create a folder and create shortcuts myself.

Thanks.

submitted by sigzero

July 02, 2015 04:24 PM

StackOverflow

Haskell - Checking if all list elements are unique

I need to compare if all elements of a given list are unique. (For the record I am doing so for academic purposes.)

Here is what I have thus far:

allDifferent :: (Eq a) => [a] -> Bool
allDifferent list = case list of
    []      -> True
    (x:xs)  -> if x `elem` xs then False else allDifferent xs

Which works wonderfully!

Now, when I try to do it like this...

allDifferent2 :: (Eq a) => [a] -> Bool
allDifferent2 list
    | null list                                                     = True        
    | (head list) `elem` (tail list) || allDifferent2 (tail list)  = False
    | otherwise  

It just doesn't work as intended. I get the following output from GHCi:

*Main> allDifferent2 [1..4]
False
*Main> allDifferent2 [1..5]
True
*Main> allDifferent2 [1..6]
False
*Main> allDifferent2 [1..7]
True

i.e. For every list with an even amount of elements it outputs False and for an odd amount of elements, True.

What am I missing? Would anyone care to shine some light?

by Luis Dos Reis at July 02, 2015 04:23 PM

StackOverflow

Bacon.retry not retrying on 'Access-Control-Allow-Origin' errors

I have a buggy Web service that sporadically sends a 500-error "XMLHttpRequest cannot load http://54.175.3.41:3124/solve. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://local.xxx.me:8080' is therefore not allowed access. The response had HTTP status code 500."

I use Bacon.retry to wrap the ajax call. When it fails, it'll just retry. However, what I notice is that the stream won't produce a value when the server fails. It's as if Bacon.retry doesn't retry (which is in fact what's happening, when I look under the hood in the dev console).

I'm using BaconJS 0.7.65.

The observable Bacon.retry looks like this:

  var ajaxRequest = Bacon.fromPromise($.ajax(//...));

  var observable = Bacon.retry({
      source: function() { return ajaxRequest; },
      retries: 50,
      delay: function() { return 100; }
    });

The code that calls the observable looks like this:

 stream.flatMap(function(valuesOrObservables) {
      return Bacon.fromArray(valuesOrObservables)
         .flatMapConcat(function(valueOrObservable) {

           switch(valueOrObservable.type) { //we calculate this beforehand
              case 'value' :
                return valueOrObservable.value;
              case 'observable' :
                return Bacon.fromArray(valueOrObservable.observables)
                            .flatMapConcat(function(obs) { return obs; })
           }
         })
 })

Observations:

  1. if I add an error handler to the observable, it still does not work.
  2. for some reason, #retry is called 50 times even when it succeeds.

by U Avalos at July 02, 2015 04:09 PM

/r/compsci

Have there been any breakthroughs in Turing complete visual languages for games so gamers can add their own modes, levels, objects, etc.?

I am particularly interested in visual languages where the visual program looks a lot like what the game looks like.

submitted by amichail
[link] [9 comments]

July 02, 2015 04:07 PM

CompsciOverflow

SimRank on a weighted directed graph (how to calculate node similarity)

I have a weighted directed graph (it's sparse, 35,000 nodes and 19 million edges) and would like to calculate similarity scores for pairs of nodes. SimRank would be ideal for this purpose, except that it applies to unweighted graphs.

It's easy to adapt the matrix representation of SimRank to use weighted edges:

$\mathbf{S} = \max\{C \cdot (\mathbf{A}^T \cdot \mathbf{S} \cdot \mathbf{A}),\mathbf{I}\}$

Just replace the entries in the adjacency matrix $\mathbf{A}$ with the edge weights. But I don't see how to derive the "iteration to a fixed-point" method from this formula.

$s_{k+1}(a, b) = \frac{C}{\left|I(a)\right|\left|I(b)\right|} \cdot \sum_{i \in I(a)} \sum_{j \in I(b)} s_{k}(i, j)$

Is the following method correct?

$s_{k+1}(a, b) = C \cdot \sum_{i \in I(a)} \sum_{j \in I(b)} s_{k}(i, j) \cdot w(i, a) \cdot w(j, b)$

My reasoning is that the original method is equivalent to this:

$s_{k+1}(a, b) = C \cdot \sum_{i \in I(a)} \sum_{j \in I(b)} s_{k}(i, j) \cdot \frac{1}{\left|I(a)\right|} \cdot \frac{1}{\left|I(b)\right|}$

so I replaced the trivially equal weights with the actual ones I have. Confirmation would be nice, though.
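
For concreteness, here is a minimal Scala sketch of the proposed weighted iteration. It is only meant to pin down the recurrence; names such as inNeighbours are illustrative and not from the SimRank papers.

// Sketch of the weighted fixed-point iteration proposed above.
// inNeighbours(a) is assumed to return the in-neighbours of a together with
// the edge weight w(i, a); s(a, a) is fixed to 1 as in ordinary SimRank.
def weightedSimRank(nodes: Seq[Int],
                    inNeighbours: Int => Seq[(Int, Double)],
                    c: Double,
                    iterations: Int): Map[(Int, Int), Double] = {
  val diagonal = nodes.map(n => (n, n) -> 1.0).toMap
  var s = diagonal.withDefaultValue(0.0)
  for (_ <- 1 to iterations) {
    val updated = for {
      a <- nodes
      b <- nodes if a != b
    } yield {
      val total = (for {
        (i, wIA) <- inNeighbours(a)
        (j, wJB) <- inNeighbours(b)
      } yield s((i, j)) * wIA * wJB).sum
      (a, b) -> c * total
    }
    s = (updated.toMap ++ diagonal).withDefaultValue(0.0)
  }
  s
}

For 35,000 nodes the all-pairs loop is of course not practical as written; it only serves to make the update rule explicit, and in practice one would restrict it to the node pairs of interest or work with the matrix form above.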

If there is a superior method to SimRank, I would like to hear about it. (I'm concerned with improvements to accuracy, not performance; this is a small data set.)

Edit: A different similarity metric that's nearly as accurate but performs a lot better would actually be useful too. My initial tests of SimRank are very slow, and the potential optimizations are not simple to code.

by Remy at July 02, 2015 04:04 PM

Fefe

The time has come: the Bundestag's IT is being set up from scratch. ...

The time has come: the Bundestag's IT is being set up from scratch. Rightly so. Muddling on is not a solution for something like this.

July 02, 2015 04:01 PM

QuantOverflow

Stochastic Calculus Rescale Exercise

I have the following system of SDE's

$ dA_t = \kappa_A(\bar{A}-A_t)dt + \sigma_A \sqrt{B_t}dW^A_t \\ dB_t = \kappa_B(\bar{B} - B_t)dt + \sigma_B \sqrt{B_t}dW^B_t $

If $\sigma_B > \sigma_A$ I would consider the volatility $B_t$ to be more volatile than $A_t$ because

$ d\langle A_\bullet\rangle_t = \sigma_A^2 B_t dt$ and $ d\langle B_\bullet\rangle_t = \sigma_B^2 B_t dt$

Now, if I rescale the process $B$ by $\sigma_A^2$ and define $\tilde{B} = \sigma_A^2 B$, I get an equivalent system of SDEs

$ dA_t = \kappa_A(\bar{A}-A_t)dt + \sqrt{\tilde{B}_t}dW^A_t \\ d\tilde{B}_t = \kappa_B(\sigma_A^2\bar{B} - \tilde{B}_t)dt + \sigma_A\sigma_B \sqrt{\tilde{B}_t}dW^B_t $
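
(Spelling out the substitution for reference: since $\tilde{B}_t = \sigma_A^2 B_t$, we have $\sqrt{B_t} = \sqrt{\tilde{B}_t}/\sigma_A$, hence

$ d\tilde{B}_t = \sigma_A^2\,dB_t = \sigma_A^2\kappa_B(\bar{B} - B_t)dt + \sigma_A^2\sigma_B\sqrt{B_t}\,dW^B_t = \kappa_B(\sigma_A^2\bar{B} - \tilde{B}_t)dt + \sigma_A\sigma_B\sqrt{\tilde{B}_t}\,dW^B_t, $

and in the equation for $A$ the diffusion term $\sigma_A\sqrt{B_t}\,dW^A_t$ becomes $\sqrt{\tilde{B}_t}\,dW^A_t$.)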

But now the claim "If $\sigma_B > \sigma_A$ I would consider the volatility $\tilde{B}_t$ to be more volatile than $A_t$" does not hold anymore. Consider $1>\sigma_B>\sigma_A$ and

$ d\langle A_\bullet\rangle_t = \tilde{B}_t dt$ and $ d\langle \tilde{B}_\bullet\rangle_t = \sigma_A^2\sigma_B^2 \tilde{B}_t dt$.

In this case the volatility $\tilde{B}$ of $A$ is more volatile than $A$ only if $\sigma_A^2\sigma_B^2>1$, which is completely different from the condition above ($\sigma_B > \sigma_A$).

What went wrong? Is there some error in the rescaling?

by Phun at July 02, 2015 03:59 PM

Historical Daily NAV for Closed End Funds

Does anyone know where to get historical daily net asset values for closed-end funds, from the date of inception to the present? Yahoo Finance has daily open, high, low and close values, but no NAV that I'm aware of through their YQL service (querying yahoo.finance.historicaldata).

At a minimum, I would like to know the NAV of a fund on the date the dividend is paid.

by mbmast at July 02, 2015 03:55 PM

Dave Winer

riverToTweets.js

This is pretty technical stuff, but it's indicative of the kind of work we're doing with River4.

https://github.com/scripting/riverToTweets

It's a publishing system. I don't think there's anything else like it in the open source world.

Something we should be connecting on with journalists. I could easily teach smart users how to set up a river of news.

They would instantly be more powerful than anyone else in news!

July 02, 2015 03:54 PM

StackOverflow

RDD.sortByKey using a function in python?

Let's say my key is not a simple data type but a class, and I need to sort the keys using a comparison function. In Scala, I can do this with a new Ordering. How can I achieve the same functionality in Python? For example, what would be the equivalent code in Python?

implicit val someClassOrdering = new Ordering[SomeClass] {
        override def compare(a: SomeClass, b: SomeClass) = a.compare(b)
    }

by MetallicPriest at July 02, 2015 03:49 PM

CompsciOverflow

Starting point and references for studying computer networks

I'm trying to determine where I should begin studying in order to eventually have a solid understanding of how computer networks work. I majored in mathematics in college. Here's a list of some of my past course work:

  • Calc I, II, III, IV
  • Physics I, II
  • Overview of Computer Science
  • Programming in C++
  • Intro to Partial Differential Equations
  • Analysis I
  • Probability and Statistics I, II
  • Algebra I, II
  • Intro to cryptology and number theory
  • Intro to differential geometry
  • Intro to algebraic geometry
  • Point set topology
  • Algebraic topology I

I've done some research and have tried to get started on my own but I think it would be best if I consulted with someone who actually knows about computer networks.

"Basics of Computer Networking" by Thomas Robertazzi seems like a good introductory source of "high-level" concepts but doesn't show anything about how to set up and secure a computer network.

"802.11 Wireless Networks" by Alan Holt and Chi-Yu Huang is another book I was considering reading at some point but I'm not too sure if that's where I should start out.

I downloaded the MIT OpenCourseWare materials for the course "Computer Networks" (course number 6.829) and am reading the first set of lecture notes. I might be able to work through the content, but I can't tell yet.

I'm thinking maybe I should download and study the OpenCourseWare materials for Introduction to Electrical Engineering and Computer Science.

Could someone please point me in the right direction? Thank you.

by Itried at July 02, 2015 03:47 PM

StackOverflow

What should we call an object declared in a clojure prog?

When we talk about a Clojure (or other Lisp) program, what should we call an object declared in it? For example:

(let [a ...

Of course, if a is a function, we say function a, but what should we say when it's not a function? Form? Data? Symbol? Literal?

Generally, in other programming languages we would call those things variables or objects.

by xando at July 02, 2015 03:46 PM

CompsciOverflow

Path in digraph passing through given set of vertices

Suppose we have a digraph G, a set W of its vertices, and two (possibly equal) vertices s and f. I'm looking for an algorithm which solves the following problem: is there a path from s to f passing through all vertices from W; if yes, return any such path. By "path" I mean path in the most general sense: repetitions of edges and vertices are allowed.

Unfortunately, no information about the graph is known: it needn't be connected in any sense, it might have oriented cycles, etc.

My idea was to modify DFS somehow, but I didn't manage to make progress. Any ideas and hints will be highly appreciated. Thanks in advance.

by Igor at July 02, 2015 03:45 PM

CompsciOverflow

Symmetric Difference of Turing Recognizable and Finite Languages

Let A be a Turing-recognizable language and B a finite language. I want to prove that their symmetric difference is Turing-recognizable.

My reasoning: B is finite, therefore its finitely many strings can be run through a DTM M such that L(M) = A. If M accepts a string, remove it from B and put it in a list R (*); otherwise, leave it in B. The DTM for the symmetric difference then checks whether the input is in B (accept), in R (reject), or otherwise whether M accepts it.

I feel like I have over-simplified this somewhere, but I would appreciate any pointer in the right direction.

I am unsure how to determine which strings are in both A and B. The way I have done it (above) would run forever, because if a string is not in A but is in B, the machine would never halt. This would mean never getting past the * above.

How can it be determined if a string is in A and B?

by csonq at July 02, 2015 03:33 PM

StackOverflow

Recursively copy /dev/null to file pattern

I am using a ZFS system that has reached its disk quota. As a result, the rm command is no longer available to me. However, I am able to copy /dev/null over particular files to effectively remove them.

The issue is that the directory I am trying to clear out is roughly 40GB in size, and each file of any size is buried three levels deep.

Is there a command-line way to easily search for those files and replace them with /dev/null? I could do it pretty easily in Python/Perl, but I want to try it from the command line.

by espais at July 02, 2015 03:20 PM

Cannot Resolve Implicit for Unmarshalling

Given this simple spray app, which tries to get a Foo response back:

object Main extends App with SimpleRoutingApp {

  implicit val system = ActorSystem("my-system")

  case class Foo(x: String) 
  implicit val format = jsonFormat1(Foo)

  val pipeline: HttpRequest => Future[Foo] = sendReceive ~> unmarshal[Foo]

  val LOGIN_URL = "foo.bar.com/login"

  startServer(interface = "localhost", port = 8080) {
    path("go") {
      get {
        complete {
          val req = Post(LOGIN_URL)
          pipeline(req).recoverWith[Foo]{ case _ => Future { Foo("foo") } }
        }
      }
    }    
  }
}

I'm getting the following compile-time error:

[error] Main.scala:19: could not find implicit value for evidence parameter of type spray.httpx.unmarshalling.FromResponseUnmarshaller[net.Main.Foo]
[error]   val pipeline: HttpRequest => Future[Foo] = sendReceive ~> unmarshal[Foo]
[error]                                                                      ^
[error] Main.scala:35: type mismatch;
[error]  found   : scala.concurrent.Future[net.Main.Foo]
[error]  required: spray.httpx.marshalling.ToResponseMarshallable
[error]           pipeline(req).recoverWith[Foo]{ case _ => Future { Foo("foo") } }
[error]                                         ^

I had expected that the implicit val format = ... would've provided an implicit Format for Foo.

How can I fix these errors?

by Kevin Meredith at July 02, 2015 03:18 PM

/r/scala

StackOverflow

ReactiveMongo: Unable to remove document from array using Reactive Mongo Query DSL

I am trying to remove a document from an array using the ReactiveMongo query DSL. My document structure is as below:

{
"_id" : ObjectId("55950666c3ad8c860b5141cc"),
"name" : "Texla",
-------------------------
 "location" : {
    "_id" : ObjectId("5595074bc3ad8c19105141d0"),
    "companyFieldId" : ObjectId("559140f1ea16e552ac90e058"),
    "name" : "Location",
    "alias" : "Site",
    "locations" : [
        {
            "_id" : ObjectId("5595074bc3ad8c19105141d1"),
            "country" : "India",
            "state" : "Punjab",
            "city" : "Moga",
            "zip" : "142001",
            "custom" : false
        },
        {
            "_id" : ObjectId("5595074bc3ad8c19105141d2"),
            "country" : "India da address",
            "custom" : true
        }
    ]
},
------------------------
}

My query using "Reactive Mongo Extensions" is :

genericCompanyRepository.update($doc("_id" $eq BsonObjectId.apply("55950666c3ad8c860b5141cc")), $pull("location.locations" $elemMatch("_id" $eq BsonObjectId.apply("5595074bc3ad8c19105141d1"))), GetLastError(), false, false);

With the above update statement, no document is pulled from the array. What is the problem with the query?

When I run the findOne query below, the element is returned.

genericCompanyRepository.findOne($doc("location.locations" $elemMatch("_id" $eq BsonObjectId.apply("5595074bc3ad8c19105141d1"))))

by Harmeet Singh Taara at July 02, 2015 03:14 PM

How to combine Lists of an Objects inside another List of Objects in Scala

I've tried working this out and think "flatten" might be part of my solution but I just can't work it out.

Imagine:

case class Thing (value1: Int, value2: Int)
case class Container (string1: String, listOfThings: List[Thing], string2: String)

So my list:

List[Container]

could be any size but for now we'll just have 3.

Inside each Container there is a list

listofthings[Thing]

that could also have any number of Things in it; for now we'll also just have 3.

So what I want to get is something like

fullListOfThings[Thing] = List(Thing(1,1), Thing(1,2), Thing(1,3),
    Thing(2,1), Thing(2,2), Thing(2,3), Thing(3,1), Thing(3,2), Thing(3,3))

The first value in each Thing is its Container number and the second value is the Thing's number within that Container.

I hope all this makes sense.

To make it more complicated for me, my list of Containers is not actually a List but rather an RDD,

rddOfContainers: RDD[Container]

and what I need at the end is an RDD of Things

fullRddOfThings: RDD[Thing]

In the Java that I am more used to, this would be pretty straightforward, but Scala is different. I'm pretty new to Scala and am having to learn this on the fly, so any full explanation would be very welcome.

I want to avoid bringing in too many external libraries if I can. In the meantime I'll keep reading. Thanks.
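
For what it's worth, a minimal sketch of the flattening described above, using the case classes from the question (the value name containers is mine; rddOfContainers is the RDD mentioned above, and the RDD line assumes a Spark context is already set up):

// flatten every Container's listOfThings into one list
val containers: List[Container] = ???   // the List[Container] from the question
val fullListOfThings: List[Thing] = containers.flatMap(_.listOfThings)

// RDD has flatMap too, so the same shape works for the RDD case
val fullRddOfThings: org.apache.spark.rdd.RDD[Thing] =
  rddOfContainers.flatMap(_.listOfThings)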

by Roy Wood at July 02, 2015 03:02 PM

Fefe

Is anyone here already using Windows 10? There is a ...

Is anyone here already using Windows 10? There is a new feature called Wifi Sense that I want to explicitly warn about. The feature is, so to speak, communism for WLAN. If you visit a Facebook friend, you automatically get onto their WLAN if they have enabled it. But for that you also have to have it enabled, and in turn your Facebook friends get onto your WLAN. The feature has apparently existed on Windows Phone for a while.

It does sound like a good idea, until you consider the legal implications in a Germany that the Landgericht Hamburg has bombed back into the Störerhaftung Pleistocene.

Microsoft enables Windows 10's Wi-Fi Sense by default, and access to password-protected networks are shared with contacts unless the user remembers to uncheck a box when they first connect.

Update: Apparently Berlin schools, at least one public administration and the pension insurance funds are also using something like this. How creepy! Too bad we don't have any parties that care about civil rights, otherwise this would be banned by law. Telecommunications secrecy and all that. It cannot be acceptable for an organization to successfully argue that telecommunications secrecy is not violated because no human is reading along, only a piece of software. That would be like claiming video surveillance is not bad because it is a camera watching and not a person.

July 02, 2015 03:01 PM

A tip came in by email that the Bundeswehr ...

A tip came in by email that the Bundeswehr is now intercepting soldiers' SSL connections to the Internet and looking inside them with a fake certificate. The official justification is, of course, to look for malware.

I know that this is also done in prisons.

I fear that we are dealing with a precursor of the next crypto wars here: that the significance of human rights like the secrecy of correspondence is being watered down step by step, just as the NSA revelations have weakened the feeling that you can talk undisturbed on the Internet (a feeling which, by the way, was never justified).

Encryption is our last line of defence. We have already said goodbye to most of the other principles of the Enlightenment.

We don't let our neighbours onto our Internet because we have become used to the Internet costing money and raising liability questions. And why, actually?

We have said goodbye to the idea that you can move around the net without being wiretapped. And why, actually?

And now they are trying to break us, one step at a time, of the idea that we have a right to encryption. First in control strongholds and fringe areas like prisons and the Bundeswehr, then they will try out all the other arguments in turn. Terrorism, organised crime, child abuse, protection of minors, and in the end the content mafia will want to look into your SSL connections to check that you are not pirating music.

I want to stress that there is no further place to retreat to. If we let them take encryption away from us, we might as well shut the Internet down again. And I am talking about end-to-end encryption here, because everything else is snake oil. They have already tried to sell us that, too, as "but that's encrypted as well then, that's enough!1!!". Don't fall for that.

Watch out for this. I predict that in the near future we will increasingly see "problems" being raised publicly that we supposedly have because someone used encryption. "We could not stop this terrorist because he encrypted his hard drive." There was already a timid attempt at this in connection with Skype, but that collapsed immediately when we pointed out that Skype of course offers lawful interception interfaces.

Update: A comment by email:

what you mention at http://blog.fefe.de/?ts=ab6b87e5 about the Bundeswehr has been normal for Hamburg schools for some time.
The wet dream of the school authority, which is stuck up to the hilt in the content mafia's backside, is to be able to monitor at any time who is doing what on the Internet (teachers, pupils, etc.).
The "solution" rolled out for this purpose is called "Time for Kids" "Schulrouter Plus" and is, all in all, an "Internet minus"; I have never seen anything cobbled together quite like it. It goes without saying that expertise in this area is happily replaced by stock photos and colourful flyers...

Although I have to say that I can quite understand this in Hamburg. There you are practically right in the neighbourhood of the LG Hamburg.

Update: Another one:

it is not only the Bundeswehr doing this, but also the employment agency. I did tech support for them for a year. There is a Squid sitting in front with a home-grown root certificate that is distributed to all machines. At least in IE that works. They also tried to introduce Firefox there, but it blocks as soon as it doesn't recognise something.

July 02, 2015 03:01 PM

Another email about the postal strike (this time from Augsburg): Statement ...

Another email about the postal strike (this time from Augsburg):
Statement from a DHL driver (he had been on strike for 4 days): "I will never strike again, because now I have to deliver everything that piled up during those 4 days on top of everything else."

Statement from a DPD driver who is privately waiting for a DHL delivery: "Right now there are 8,000 parcels sitting in the Gersthofen/Augsburg distribution centre. Working through the backlog will take up to 6 months."

A driver has 100-300 parcels to deliver per day (300 being the Christmas peak).

Update: And another email about the postal strike:

Noch was von "meinem" Zusteller, der verbeamtet natürlich nicht streikt: Die streikenden Kollegen können sich drauf gefaßt machen, daß sie demnächst die unbeliebten Jobs und Schichten machen müssen, also Springerdienste u.ä. Zu den Postlern mit Zeitverträgen bei der Post und nicht bei den Billigsubunternehmen weiß er, daß einige bereits mit der Zusage auf einen unbefristeten Vertrag ins "gehaltsoptimierte" Subunternehmen gelockt wurden, also weniger Gehalt bei geringfügig erhöhter Arbeitsplatz"sicherheit". Na dann ist ja alles gut :-(

Then in future you can even ship your parcel with Hermes without a guilty conscience.

July 02, 2015 03:01 PM

Unfortunately I have no written source for the ...

Unfortunately I have no written source for the following, but hopefully one will turn up; surely some press outlet will report on it. It is about the case of Josef S., you remember, the student from Jena who got caught between the millstones of the Austrian justice system.
Hello Fefe,

today the appeal hearing took place in Vienna and the verdict from the first instance was upheld (and even regarded as too lenient).

Richter: "Dass Sie anderswo friedlich demonstriert haben, sagt nichts aus. Allein, dass Sie aus dem Ausland einreisen, dass Sie sich auch durch Ihre Kleidung in den Schwarzen Block eingefügt haben - wenn Sie eine solche Verhaltensweise an den Tag legen, dann ist mir das zu wenig, wenn Sie sagen Sie demonstrieren anderswo friedlich."

The rest is online, though:
The message of the verdict to demonstrators is: "Careful: anyone who goes to a left-wing demonstration could end up in prison," lawyer Clemens Lahner told the Deutsche Presse-Agentur. "The decision is bitter." He criticised that the verdict relies on a single prosecution witness whose statements are contradictory. In theory there is the option of appealing to the European Court of Human Rights (ECtHR), Lahner said, but this is not currently planned.
Within Austria the chain of appeals is exhausted; the verdict is final.

I really have to say that I am quite shocked by the Austrian justice system here. I would not have thought that verdicts like this, which remind me more of a witch hunt than of a fair trial, were possible in Austria. Very, very sobering.

Update: There is now a linkable source for the quote above: the live ticker from Der Standard.

July 02, 2015 03:01 PM

Old and busted: Bad banks. New hotness: the EZB buys bonds ...

Old and busted: Bad banks.

New hotness: the EZB buys bonds from bad companies.

QE = quantitative easing, the "we'll lend money at practically no interest until the economy is doing well again" programme. Concretely, this is about a few ex-monopolists from the Italian energy sector: Enel, Snam and Terna. Enel is Italy's largest electricity provider, Snam is a gas network operator, Terna operates 90% of the Italian power grid. Hmm, wait, isn't the head of the EZB an Italian? Oh yes, Mario Draghi! What a coincidence!

July 02, 2015 03:01 PM

StackOverflow

Casting a Scala Object into a Generic type?

Firstly, some background:

I'm currently writing a generic solver for a certain class of problems (specifically, a structural SVM solver) in Scala. In order to use this, the user has to implement an interface.

Consider a simplified version of the interface:

trait HelperFunctions[X, Y] {
    def func1(x: X, y: Y): Y
    def func2(y1: Y, y2: Y): Int
}

and a simple implementation:

object ImplFunctions extends HelperFunctions[Vector[Double], Double] {
    def func1(x: Vector[Double], y: Double): Double  = { ... }
    def func2(y1: Double, y2: Double): Int = { .. }
}

(Note that the implementation has to be provided in form of an object.)

Now, the problem: I need to write a diagnostic suite which aids the user in verifying the sanity of his functions. An example sanity test would involve something like func1(..) being positive for all x and y. For this, I set out by writing BDD-style unit tests using ScalaTest.

Note that the diagnostic suite should be generic, working with any object which extends HelperFunctions[X, Y]. A high-level picture of how I planned to go about this: first provide all the sanity tests for generic X and Y; then the user merely replaces, say, a ??? with ImplFunctions and runs the suite.

But it turns out I couldn't find an elegant approach to treat ImplFunctions as a generic HelperFunctions[X, Y]. Here's a taste of what I've tried so far:

import org.scalatest._

class ApplicationDiag extends FlatSpec {

  // Option 1:
  // Cast the ImplFunctions into a generic type
  // ERROR: type mismatch; found : ImplFunctions.type, required : HelperFunctions[X, Y]
  val helpers: HelperFunctions[X, Y] = ImplFunctions
  // OR
  def helpers[X, Y](): HelperFunctions[X, Y] = ImplFunctions


  // Option 2: User fills up the right side, no type specified for the value
  // WORKS. But, cannot explicitly state types
  val helpers = ImplFunctions

  "func1" should "be positive" in {
    // ... the check as described previously
   }

}

Since an instance of a type can normally be used where its supertype is expected, I imagine this should be possible. But it seems to be tougher since generics are involved. Is there a clean way to do this?

Note: 1. I'm totally open to alternate designs for the diagnostic suite. 2. The real-world interface: HelperFunctions and a real-world application: ImplFunctions

by Tribhuvanesh at July 02, 2015 02:56 PM

How to catch exception in future in play framework 2.4

I'm trying to figure out how to catch an exception from within a future in a function being called by an asynchronous action in Play Framework 2.4. However, the code I've got using recover never seems to get executed - I always get an Execution exception page rather than an Ok response.

The action code is:

def index = Action.async {
    cardRepo.getAll()
    .map {
      cards => Ok(views.html.cardlist(cards))
    }.recover{
      case e: Exception => Ok(e.getMessage)
    }
  }

The code in cardRepo.getAll (where I've hard-coded a throw new Exception for experimentation) is:

def getAll(): Future[Seq[Card]] = {

    implicit val cardFormat = Json.format[Card]

    val cards = collection.find(Json.obj())
      .cursor[Card]()
      .collect[Seq]()

    throw new Exception("OH DEAR")

    cards
  }

I've seen similar questions on Stack Overflow but I can't see what I'm doing wrong.

by MarkJ at July 02, 2015 02:50 PM

Planet Emacsen

Irreal: Emacs and Data Science

Robert Vesco has an interesting post on why he uses Emacs in his data science work. Vesco is a data scientist for Bloomberg so he's a serious practitioner of the art. Working in data science means he uses a variety of languages such as Python, R, SQL, Stata, and SAS. He notes that most of those languages have an associated IDE that simplifies working with them but that that means learning multiple editors and probably mastering none. He also notes that those specialized IDEs may fall out of favor and not be supported in the future, that they are not portable across platforms, and that they are hard to customize.

Happily, those defects do not apply to Emacs. It runs on essentially every (serious) platform, is open source, will be supported for as long as there are a few programmers still interested in using it, and, of course, is famously customizable. One consequence of that customizability is it can become a reasonable IDE for almost any language. That means that a single tool can be used for all those languages and that it's worthwhile mastering that tool because it's the only (editing) tool you need to learn.

The bulk of Vesco's post covers those features of Emacs that he finds most useful in his work. One of those features is Org mode that allows him to use reproducible research methods in his research and publishing. It's an interesting read even if you're not a data scientist.

by jcs at July 02, 2015 02:46 PM

StackOverflow

Decorating a Scala Play Controller with Java class Secured extends Security.Authenticator

I'm refactoring a Play 2.3 app in Java to Scala. Existing Java controllers are decorated like so for authentication.

@Security.Authenticated(Secured.class) public class Application extends Controller { ... }

The signature of Secured.java is:

public class Secured extends Security.Authenticator { ... }

How might I decorate my Scala controller with the same Secured.java?

I've tried to avoid that by writing a second Secured2.scala as a trait and doing authentication the Scala way in Play, but many of the existing templates rely on Secured.java to get the current user, so that's why I'm trying to make my Scala controller compatible with the Java Secured class.

by Paul Lam at July 02, 2015 02:45 PM

How to force lein deps to re-fetch local jars/libs

using the following instructions:

http://www.pgrs.net/2011/10/30/using-local-jars-with-leiningen/

I installed some local jars into local repository.

When I want to update the jar in my project, I re-install the jar into the repository and then run lein deps. I am finding that somehow, the jar is not updated in my project. Even when I rm -rf everything in the libs folder, the new jar is not picked up. The only way I have been able to get this to work is to change the name of the jar.

It's sort of odd, because this occurs even when I have deleted all traces of the old jar (as far as I know) -- does lein hide a snapshot/cache of libs?

by hiroprotagonist at July 02, 2015 02:31 PM

Reusing part of Stream mapping and filtering to compose two different results

I'd like to know if there is a good way of reusing a common stream operation that varies at the end for different outputs. The example below is exactly what I'm trying to compact into a one-step operation:

public static DepartmentInfo extractDepartmentInfo(BaselinePolicy resource) throws ResourceProcessorError {
    Function<Exception, Exception> rpe = e -> new ResourceProcessorError(e.getMessage());
    List<String> parents = 
        Objects.requireNonNull(
            Exceptions.trying(
                () -> Arrays.asList(Exceptions.dangerous(resource::getParentIds).expecting(CMException.class).throwing(rpe))
                            .stream()
                            .map(cId -> Exceptions.dangerous(cId, resource.getCMServer()::getPolicy).expecting(CMException.class).throwing(rpe))
                            .filter(policy -> PagePolicy.class.isAssignableFrom(policy.getClass()))
                            .map(PagePolicy.class::cast)
                            .filter(page -> Exceptions.dangerous(page,
                                                                 p -> Boolean.valueOf(p.getComponentNotNull(ComponentConstants.POLOPOLY_CLIENT, 
                                                                                                            ComponentConstants.IS_HOME_DEPARTMENT,
                                                                                                            Boolean.FALSE.toString())).booleanValue())
                                                      .expecting(CMException.class).throwing(rpe))
                            .map(page -> Exceptions.dangerous(page, p -> p.getExternalId().getExternalId()).expecting(CMException.class).throwing(rpe)), ResourceProcessorError.class)
                            .collect(Collectors.toList()));
    String externalId = parents.get(parents.size()-1).toString();
    List<String> list = 
        Objects.requireNonNull(
            Exceptions.trying(
                () -> Arrays.asList(Exceptions.dangerous(resource::getParentIds).expecting(CMException.class).throwing(rpe))
                            .stream()
                            .map(cId -> Exceptions.dangerous(cId, resource.getCMServer()::getPolicy).expecting(CMException.class).throwing(rpe))
                            .filter(policy -> PagePolicy.class.isAssignableFrom(policy.getClass()))
                            .map(PagePolicy.class::cast)
                            .map(page -> 
                                Exceptions.dangerous(page, 
                                        p -> p.getChildPolicy(PATH_SEGMENT) != null && 
                                             StringUtils.hasLength(SingleValued.class.cast(p.getChildPolicy(PATH_SEGMENT)).getValue())? 
                                             SingleValued.class.cast(p.getChildPolicy(PATH_SEGMENT)).getValue(): p.getName()).expecting(CMException.class).throwing(rpe))
                            .filter(val -> val != null && !val.isEmpty()), ResourceProcessorError.class)
                            .collect(Collectors.toList()));
    if(list.size() > 3) {
        list = list.subList(list.size() - 3, list.size()-1);
    }
    switch(list.size()) {
        case 0: {
            throw new ResourceProcessorError("br.com.oesp.XMLRender.error.noProduct");
        }
        case 1: {
            return DepartmentInfo.withProduct(list.get(0), externalId);
        }
        case 2: {
            return DepartmentInfo.withProduct(list.get(0), externalId).withDepartment(list.get(1));
        }
        default: {
            return DepartmentInfo.withProduct(list.get(0), externalId).withDepartment(list.get(1)).withSubDepartment(list.get(2));
        }
    }
}

Notice that the first step is repeated for both:

List<String> parents = 
    Objects.requireNonNull(
        Exceptions.trying(
            () -> Arrays.asList(Exceptions.dangerous(resource::getParentIds).expecting(CMException.class).throwing(rpe))
                        .stream()
                        .map(cId -> Exceptions.dangerous(cId, resource.getCMServer()::getPolicy).expecting(CMException.class).throwing(rpe))
                        .filter(policy -> PagePolicy.class.isAssignableFrom(policy.getClass()))
                        .map(PagePolicy.class::cast)

It's not only a readability problem; more importantly, I'm doing a heavy operation twice, whereas in a more imperative style I'd do it once.

by romerorsp at July 02, 2015 02:25 PM

/r/emacs

Opening and sorting archives

It's great that emacs can open and edit most archives in a buffer, but is it possible to re-order the files?

I have a tar archive open, and can see there are many functions with a "tar-" prefix but none look like sorters.

Alphabetically would be most useful, but a few more options would be handy too.

submitted by its_never_lupus
[link] [6 comments]

July 02, 2015 02:25 PM

TheoryOverflow

What are some of the most ingenious linear programs developed for tackling hard combinatorial problems? [on hold]

I would like to know about some known ingenious linear programs that have been developed for tackling hard combinatorial optimization problems, especially any linear programs which have helped in obtaining good approximation algorithms for long-standing open questions related to NP-hard optimization problems.

by user1105 at July 02, 2015 02:23 PM

StackOverflow

Proper way to concatenate variable strings

I need to create a new variable from the contents of other variables. Currently I'm using something like this:

- command: echo "{{ var1 }}-{{ var2 }}-{{ var3 }}"
  register: newvar

The problem is:

  • Usage of {{ var1 }}...{{ varN }} produces very long strings and ugly code.
  • Usage of {{ newvar.stdout }} is a bit better but confusing.
  • Usage of the set_fact module caches the fact between runs, which isn't appropriate for me.

Is there any other solution?

by Timofey Stolbov at July 02, 2015 02:22 PM

/r/emacs

evil operator for google-translate.el

There is such an operator, but it works with an older version of google-translate. I started from that and hacked up a simple definition for the new google-translate.el. Currently it asks for both languages (source, destination), but I'm sure the code will be easy to improve:

;; google-translator operator
(defvar text-to-translate ""
  "Holds the text to be translated.")

(defun evil-google-translate--block-line (beg end)
  "Get current line from the block and append it to the translation text."
  (setq text-to-translate
        (concat text-to-translate " " (buffer-substring-no-properties beg end))))

(evil-define-operator evil-google-translate (beg end type)
  "Evil operator: translate using *google-translator* package"
  :move-point nil
  (interactive "<R>")
  (setq text-to-translate "")
  (if (eq type 'block)
      (evil-apply-on-block 'evil-google-translate--block-line beg end nil)
    (setq text-to-translate (buffer-substring-no-properties beg end)))
  (let* ((source-language (google-translate-read-source-language))
         (target-language (google-translate-read-target-language)))
    (google-translate-translate source-language target-language text-to-translate)))

;; use 'gt' as operator key-combo:
(define-key evil-normal-state-map "gt" 'evil-google-translate)
(define-key evil-motion-state-map "gt" 'evil-google-translate)
(define-key evil-visual-state-map "gt" 'evil-google-translate)

So now, to translate a paragraph, just press gtip from Evil normal state.

EDITED: now, it can even translate a visual block selection :)

submitted by VanLaser
[link] [2 comments]

July 02, 2015 02:16 PM

/r/clojure

QuantOverflow

According to Lo and MacKinlay (1990), momentum profits can be divided in 3 parts. What do they represent exactly?

At first, Lo and MacKinlay ("When are Contrarian Profits Due to Stock Market Overreaction?", 1990) didn't develop it for momentum specifically. However, Kyung-In Park and Dongcheol Kim ("Sources of Momentum Profits in International Stock Markets", 2011) use it to identify and analyse the differences in the momentum effect across countries. Here is the exact part that I would like to understand:

The purpose of this paper is to examine the differences in the underlying forces determining momentum profits between countries exhibiting and non-exhibiting the momentum phenomenon and to induce which component(s) drives momentum profits. In order to determine the underlying forces of momentum profits, we use the decomposition method of momentum profits by Lo and MacKinlay (1990). They decompose momentum profits into three components; (1) the first-order serial covariance of market returns, (2) the average of first-order serial covariances of all individual assets, and (3) the cross-sectional dispersion in unconditional mean returns of individual assets. The total momentum profit equals minus (1) plus (2) plus (3) [i.e., −(1) + (2) + (3)]. That is, the first term contributes negatively, and the second and third terms contribute positively to momentum profits. The first two components reflect the intertemporal behavior [−(1) + (2)], and the third component reflects the cross-sectional behavior of asset returns [+(3)].

As you can see, Kim and Park have subsumed the 3 parts into 2 parts: the inter temporal behavior and the cross-sectional behaviour of asset returns.

More precisely, in the conclusion of the article we can find:

Momentum profits can be decomposed into two parts; the part reflecting the cross-sectional difference in unconditional expected returns and the part reflecting the intertemporal behavior of asset returns.

So my question can be rewritten like this: what exactly is the part reflecting the cross-sectional difference in unconditional expected returns, and what is the part reflecting the intertemporal behavior of asset returns?

by Pierre at July 02, 2015 02:08 PM

CompsciOverflow

Combinatorial optimization - is there a formal name for this problem?

I am looking for a formal name and an algorithmic approach to the following problem.

Given is a set of services each coming with a price:

  • {s1, 300}
  • {s2, 400}
  • {s3, 800}

Additionally, there is a set of service packages, each with a price that may differ from the total cost of the individual services:

  • {p1, {s1, s2}, 600}
  • {p2, {s2, s3}, 1050}
  • {p3, {s1, s3}, 950}
  • {p4, {s1, s2, s3}, 1250}

Now, given a list of services you want performed (each service may appear many times), find the combination of packages and individual services with the lowest price. You can only use a package if you can completely fill it with services. A package may be ordered many times.

For example: Services to order :{s1,s2,s1,s3,s2,s3,s1}

possible Solutions:

  • {p4, p4, s1} : 2800
  • {p1, p1, p3, s3} : 2950

In this case the first solution would be the winner.

An exhaustive approach will quickly explode because it is O(n!) or even worse.

What is the formal name of this problem (if there is one)? How would you algorithmically approach that problem for hundreds of services and packages?
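
One way to make the structure explicit is as an integer program (a sketch, not necessarily the textbook name the question asks about): let $d_s$ be how many times service $s$ is ordered, $x_p$ the number of times package $p$ is used and $y_s$ the number of individually bought copies of $s$, all non-negative integers. Then

$ \min \sum_p \text{price}(p)\, x_p + \sum_s \text{price}(s)\, y_s \quad \text{subject to} \quad \sum_{p\,:\,s \in p} x_p + y_s = d_s \ \text{ for every service } s. $

In the example, $d_{s1}=3, d_{s2}=2, d_{s3}=2$; the solution $x_{p4}=2, y_{s1}=1$ satisfies the constraints at cost $2 \cdot 1250 + 300 = 2800$.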

by Kolophonius at July 02, 2015 02:07 PM

Fefe

Regarding the postal strike, the following email (from ...

Regarding the postal strike, the following email (from Bremen) just came in:
Earlier (around 12) at the university, we (a group of four young men) were approached twice, barely a minute apart, by people in Post t-shirts asking whether we wanted a side job.
A fellow student asked when his letter would arrive, a flat termination notice and so on. The Post guy said that private letters are given priority, but that the Post is nevertheless renting warehouses (!) to store the stuff in the meantime.
Advertising that is supposed to be delivered by post is currently being thrown in the bin because there is no room for it. xD
The strike is slowly getting interesting.

Update: A few more details. The Post has three categories of letters. First, normal mail, category "fully paid". Those are not thrown away. If someone addresses a letter to you and pays normal postage for it, it will not be thrown away, even if it says "I am advertising" on it. The second category is "Infopost", which costs only half as much. This mail is still personally addressed and is not thrown away. And then there is advertising, i.e. those "this week at Lidl" things, unaddressed. I assume that is what is meant by "gets thrown away", and probably only after consulting the customers. After all, Lidl gains nothing from having long-expired offers from six weeks ago delivered now. Oh, and then there are the "to all residents of $ANSCHRIFT" things, which in my view also don't count as advertising in the above sense.

July 02, 2015 02:01 PM

Oh, really? According to Doctors Without Borders, TPP is the ...

Oh, really? According to Doctors Without Borders, TPP is the pharma mafia's wet dream:
“There’s very little distance between what Pharma wants and what the U.S. is demanding,” said Rohit Malpani, director of policy for Doctors Without Borders.
It is mainly about access to generics and biologics, and TPP (and thus presumably TTIP as well) would artificially restrict access to medicines worldwide and make them more expensive.
But Malpani of Doctors Without Borders said U.S. negotiators have basically functioned as drug lobbyists. [...]

“We consider this the worst-ever agreement in terms of access to medicine,” he said. “It would create higher drug prices around the world—and in the U.S., too.

July 02, 2015 02:01 PM

QuantOverflow

pdf of simple equation, compound Poisson noise

I would like to find the probability density function (at stationarity) of the random variable $X_t$, where: \begin{equation*} dX_t = -aX_t dt + d N_t, \end{equation*} $a$ is a constant and $N_t$ is a compound Poisson process with Poisson jump size distribution.

In other words, $X_t$ solves the ordinary differential equation $\frac{d X_t}{dt} + a X_t=0$, except at times $t_i$, say, whose inter-arrival times are exponentially distributed with mean $1/k$, when $X_t$ increases by an integer drawn from $M\sim Poi(m)$ (i.e. $X_t$ gets a Poisson-distributed "kick" upwards at exponentially distributed intervals).

Is there a way of obtaining the pdf for this random variable $X$? If I have understood things correctly, the Kramers-Moyal equation for the pdf of $X$ is of infinite order because it is a jump Markov process. I have also tried looking at the Master Equation but I get lost. However, I am new to this literature and was wondering if the solution is easy for those in the know, since it is such a simple system.

Many thanks for your help!

by stochastic_newbie at July 02, 2015 01:59 PM

DragonFly BSD Digest

DragonFly 4.2.1 out

There’s a minor update for DragonFly 4.2 – this covers a problem with i915 support, so it’s worth upgrading if you have an Intel video chipset.

by Justin Sherrill at July 02, 2015 01:29 PM

StackOverflow

Infinite recursion with Shapeless select[U]

I had a neat idea (well, that's debatable, but let's say I had an idea) for making implicit dependency injection easier in Scala. The problem I have is that if you call any methods which require an implicit dependency, you must also decorate the calling method with the same dependency, all the way through until that concrete dependency is finally in scope. My goal was to be able to encode a trait as requiring a group of implicits at the time it's mixed in to a concrete class, so it could go about calling methods that require the implicits, but defer their definition to the implementor.

The obvious way to do this is with some kind of self-type, à la this pseudo-Scala:

object ThingDoer {
  def getSomething(implicit foo: Foo): Int = ???
}

trait MyTrait { self: Requires[Foo and Bar and Bubba] =>
  //this normally fails to compile unless doThing takes an implicit Foo
  def doThing = ThingDoer.getSomething
}

After a few valiant attempts to actually implement a trait and[A,B] in order to get that nice syntax, I thought it would be smarter to start with shapeless and see if I could even get anywhere with that. I landed on something like this:

import shapeless._, ops.hlist._

trait Requires[L <: HList] {
  def required: L
  implicit def provide[T]:T = required.select[T]
}

object ThingDoer {
  def needsInt(implicit i: Int) = i + 1
}

trait MyTrait { self: Requires[Int :: String :: HNil] =>
  val foo = ThingDoer.needsInt
}

class MyImpl extends MyTrait with Requires[Int :: String :: HNil] {
  def required = 10 :: "Hello" :: HNil
  def showMe = println(foo)
}

I have to say, I was pretty excited when this actually compiled. But it turns out that when you actually instantiate MyImpl, you get an infinite mutual recursion between MyImpl.provide and Requires.provide.

The reason that I think it's due to some mistake I've made with shapeless is that when I step through, it's getting to that select[T] and then steps into HListOps (makes sense, since HListOps is what has the select[T] method) and then seems to bounce back into another call to Requires.provide.

My first thought was that it's attempting to get an implicit Selector[L,T] from provide, since provide doesn't explicitly guard against that. But,

  1. The compiler should have realized that it wasn't going to get a Selector out of provide, and either chosen another candidate or failed to compile.
  2. If I guard provide by requiring that it receive an implicit Selector[L,T] (in which case I could just apply the Selector to get the T) then it doesn't compile anymore due to diverging implicit expansion for type shapeless.ops.hlist.Selector[Int :: String :: HNil], which I don't really know how to go about addressing.

Aside from the fact that my idea is probably misguided to begin with, I'm curious to know how people typically go about debugging these kinds of mysterious, nitty-gritty things. Any pointers?

by Jeremy at July 02, 2015 01:28 PM

how to publish-local from an sbt task (build.scala)

How do you publish a project to the local ivy repository, from within code inside Build.scala, rather than from the sbt command-line? This should perform the same as issuing the publish command on the sbt command-line.

I have a multi-project build definition, and I would like (only) one of the contained projects to get published to the local ivy repository.

by matt at July 02, 2015 01:23 PM

CompsciOverflow

How can it be decidable whether $\pi$ has some sequence of digits?

We were given the following exercise.

Let

$\qquad \displaystyle f(n) = \begin{cases} 1 & 0^n \text{ occurs in the decimal representation of } \pi \\ 0 & \text{else}\end{cases}$

Prove that $f$ is computable.

How is this possible? As far as I know, we do not know whether $\pi$ contains every sequence of digits (or which ones), and an algorithm certainly cannot decide that some sequence does not occur. Therefore I think $f$ is not computable, because the underlying problem is only semi-decidable.

by Raphael at July 02, 2015 01:21 PM

StackOverflow

How to implement Haskell's splitEvery in Swift?

PROBLEM

let x = (0..<10).splitEvery( 3 )
XCTAssertEqual( x, [(0...2),(3...5),(6...8),(9)], "implementation broken" )

COMMENTS

I am running into problems calculating the number of elements in the Range, etc.

extension Range
{
    func splitEvery( nInEach: Int ) -> [Range]
    {
        let n = self.endIndex - self.startIndex // ERROR - cannot invoke '-' with an argument list of type (T,T)
    }
}

by kfmfe04 at July 02, 2015 01:17 PM

CompsciOverflow

Weka class question [on hold]

We are trying to run J48 on a classified data set. Our class attribute has two possible values (0, 1). When running J48, the tree terminates at the very first node and doesn't process any further.

Instead of considering (0 = false) as the starting point of J48, how can we run J48 with (1 = true) selected as the starting point of the tree?

Any suggestion will be greatly appreciated.

by user35161 at July 02, 2015 01:13 PM

Fefe

The anti-press-freedom law now applies in Spain. A ...

The anti-press-freedom law now applies in Spain.

A police officer thinks you did not show him enough respect? A fine of 600-30,000 €.

Smoking a joint in a bar or on public transport? A fine of 600-30,000 €.

Anyone who tries to prevent an eviction is in for 600-300,000 € (one zero more).

Taking unauthorised photos of the police is also banned, as is calling for protests on the Internet.

Anyone who disrupts public appearances by politicians or sporting or religious events is in for 600-300,000 €. And anyone who takes part in an unauthorised protest near the parliament even has to pay 30,000-600,000 €.

July 02, 2015 01:01 PM

If you didn't already pick it up with the Oettinger tweet earlier ...

If you didn't already pick it up with the Oettinger tweet earlier: the alleged net neutrality rules currently being celebrated by people with a reading disability are a con job of the first order. They simply defined themselves an "open Internet" in which net neutrality exists. And alongside the open Internet there is then also a network without neutrality, sold by the same ISPs. That is not covered by the regulation.

So this is like celebrating as a great victory that we no longer sell any weapons to Saudi Arabia. Except, of course, for the weapons that we do sell to Saudi Arabia. But then, nothing was to be expected there anyway.

July 02, 2015 01:01 PM

StackOverflow

Consuming and API with Scala Dispatch Same method two different JSON

I'm consuming a REST API. When the call succeeds, the API returns a 200 OK header; the body can then contain one of two different JSONs.

{"Error": {
   "code" = 1
   "msg" = "some error message"
  }
}

Or, if the data sent was correct, it returns

{"code" : {
  "status" = "Your submission is ..."
  "msg" = "It is happy"
  "answer" = {...}
}

The problem is that if I use json4s, I must know which case class to use. What should I do? Use Either[Error, Code], by converting the JValue to a String and checking whether it contains "Error", then Left(Error) or else Right(Code)? Which solution should I take? I'm looking for a good solution, and maybe a proper explanation of it.

The problem with my approach is that Dispatch gives me Either[String, JsValue], so I would end up with Either[String, Either[Error, Code]], which does not seem like a good type.
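
For illustration only, a minimal json4s sketch of the "probe for the Error key" idea; the case class names and fields here are assumptions based on the JSON above, not the real API:

import org.json4s._

object ResponseParsing {
  case class ApiError(code: Int, msg: String)
  case class ApiCode(status: String, msg: String)

  implicit val formats: Formats = DefaultFormats

  // if the body has an "Error" object, decode it as the error case,
  // otherwise decode the "code" object as the success case
  def parseResponse(body: JValue): Either[ApiError, ApiCode] =
    body \ "Error" match {
      case JNothing => Right((body \ "code").extract[ApiCode])
      case err      => Left(err.extract[ApiError])
    }
}

Probing the parsed JValue rather than the raw String avoids false positives when a successful payload happens to contain the word Error.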

by anquegi at July 02, 2015 12:55 PM

Terminating a Future and getting the intermediate result

I have a long-running Scala Future that operates as follows:

  1. Calculate initial result
  2. Improve result
  3. If no improvement then terminate, else goto 2

After receiving an external signal (meaning the Future won't have any a priori knowledge about how long it's supposed to run for), I would like to be able to tell the Future to terminate and give me its intermediate result. I can do this using some sort of side channel (note: this is a Java program using Akka, hence the reason I'm creating a Scala future in Java along with all of the attendant boilerplate):

public void doCalculation(AtomicBoolean interrupt, AtomicReference output) {
    Futures.future(new Callable<Boolean>() {
        public Boolean call() {
            Object previous = // calculate initial value
            output.set(previous);
            while(!interrupt.get()) {
                Object next = // calculate next value
                if(/* next is better than previous */) {
                    previous = next;
                    output.set(previous);
                } else return true;
            }
            return false;
        }
    }, TypedActor.dispatcher());
}

This way whoever is calling doCalculation can get intermediate values via output and can terminate the Future via interrupt. However, I'm wondering if there is a way to do this without resorting to side channels, as this is going to make it somewhat difficult for somebody else to maintain my code. We're using Java 7.

by Zim-Zam O'Pootertoot at July 02, 2015 12:54 PM

How to implement parametric lenses that change type of state

So in Scala we have the typical Lens signature:

case class Lens[O,V](get: O => V, set: (O,V) => O)

But as you can see, it only gets and sets values of the same type; it does not set one type for another. What I have in mind is something more like this:

case class Lens[O[_],A,B](get: O[A] => A, set: (O[A],B) => O[B])

with A and B making sense for O[_]. My question is: does this stop being isomorphic? Is there a simpler way without breaking some rules?
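
For comparison, the usual "type-changing" (polymorphic) lens shape keeps four type parameters instead of abstracting over O[_]; a sketch:

object LensSketch {
  // S = whole before, T = whole after, A = part read, B = part written
  case class PLens[S, T, A, B](get: S => A, set: (S, B) => T)

  // the original Lens is the special case in which nothing changes type
  type SimpleLens[O, V] = PLens[O, O, V, V]
}

Note that the usual get/set laws (e.g. set(s, get(s)) == s) only even type-check when A = B and S = T, which is one way to read the "does it stop being isomorphic" worry.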

by user1553111 at July 02, 2015 12:42 PM

write an RDD into HDFS in a spark-streaming context

I have a Spark Streaming environment with Spark 1.2.0 where I retrieve data from a local folder, and every time a new file is added to the folder I perform some transformations.

val ssc = new StreamingContext(sc, Seconds(10))
val data = ssc.textFileStream(directory)

In order to perform my analysis on DStream data I have to transform it into an Array

var arr = new ArrayBuffer[String]();
   data.foreachRDD {
   arr ++= _.collect()
}

Then I use the data obtained to extract the information I want and save it to HDFS.

val myRDD  = sc.parallelize(arr)
myRDD.saveAsTextFile("hdfs directory....")

Since I really need to manipulate the data as an Array, it's impossible to save the data to HDFS with DStream.saveAsTextFiles("...") (which would work fine), so I have to save the RDD; but with this procedure I end up with empty output files named part-00000 etc...

With arr.foreach(println) I am able to see the correct results of the transformations.

My suspicion is that Spark tries to write the data to the same files at every batch, deleting what was previously written. I tried saving to a dynamically named folder like myRDD.saveAsTextFile("folder" + System.currentTimeMillis().toString()), but only one folder is ever created and the output files are still empty.

How can I write an RDD into HDFS in a spark-streaming context?
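
For what it's worth, here is a minimal sketch of doing the extraction per batch inside foreachRDD and giving each batch its own output directory, instead of collecting to the driver; extractInfo is a placeholder for the real per-line transformation, not something from the question:

// data is the DStream obtained from ssc.textFileStream(directory)
data.foreachRDD { (rdd, time) =>
  // extractInfo is hypothetical: whatever per-line extraction is needed
  val result = rdd.flatMap(line => extractInfo(line))
  if (result.count() > 0)
    result.saveAsTextFile("hdfs:///output/batch-" + time.milliseconds)
}

Each batch then lands in its own directory, so nothing gets overwritten and the work stays distributed.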

by dr_stein at July 02, 2015 12:37 PM

Creating typed collection

I am trying to understand Scala collections by adding a new collection as follows:

class NewColl[V](values:Vector[V],someOtherParams)
extends IndexedSeq[V] with IndexedSeqLike[V, NewColl[V]] {

  def fromSeq[V](seq: Seq[V]): NewColl[V] = ...

  override def newBuilder[V]: Builder[V, NewColl[V]] =
    new ArrayBuffer[V] mapResult fromSeq[V]
}

but I get the following error:

overriding method newBuilder in trait TraversableLike
   of type => scala.collection.mutable.Builder[V,NewColl[V]];
method newBuilder in trait GenericTraversableTemplate
   of type => scala.collection.mutable.Builder[V,IndexedSeq[V]] has incompatible type

Any Idea?
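
For comparison, the customary shape (a sketch of the pattern from the standard collections documentation, not a verified fix for this exact class) declares newBuilder without its own type parameter, since V is already bound by the class:

import scala.collection.IndexedSeqLike
import scala.collection.mutable.{ArrayBuffer, Builder}

class NewColl[V](val values: Vector[V])
    extends IndexedSeq[V] with IndexedSeqLike[V, NewColl[V]] {

  def apply(idx: Int): V = values(idx)
  def length: Int = values.length

  // no [V] here: the class's own V is used, so the signature matches TraversableLike
  override protected[this] def newBuilder: Builder[V, NewColl[V]] =
    new ArrayBuffer[V] mapResult (seq => new NewColl(seq.toVector))
}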

by teucer at July 02, 2015 12:35 PM

Parsing a CSV string while ignoring commas inside the individual columns

I am trying to split a csv string with comma as delimiter.

val string ="A,B,"Hi,There",C,D"

I cannot use string.split(",") because it will split "Hi,There" into two different columns. Can I use a regex to solve this? I came across the scala-csv parser, which I don't want to use. I hope there is a better method to solve this problem. I know this is not a trivial problem. It would be helpful if people can share their approaches to solving it.
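
One possible sketch of the regex approach (it assumes quotes are balanced and never escaped inside a field): split on commas that are followed by an even number of double quotes.

val line = """A,B,"Hi,There",C,D"""

val cols = line.split(""",(?=(?:[^"]*"[^"]*")*[^"]*$)""")

cols.foreach(println)
// A
// B
// "Hi,There"   (quotes are kept; strip them afterwards if needed)
// C
// D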

by COSTA at July 02, 2015 12:25 PM

Fefe

Things are looking up in Africa, by the way. Of course you can't ...

Things are looking up in Africa, by the way. Of course you can't expect miracles. But it's not as if everything over there is just stagnating. There is reason for hope.

July 02, 2015 12:01 PM

StackOverflow

Maven Scala Projects with IntelliJ

I have an IntelliJ IDEA project that contains two Scala modules, M1 and M2. Each of those modules contains a single Scala class, C1 and C2 respectively. In addition, class C2 imports class C1.

All went well until I added Maven module support for both modules.

The first step was adding Maven framework support for M1 (the "depend-on" module). I was able to compile the project after that.

The second step was adding framework support for M2 (the dependent module).

Immediately after I did that, C2 was unable to resolve C1 anymore, even though the paths seem OK and M1 appears in M2's dependencies list.

Of course I've also added framework support for the project itself.

Any idea what is going on there?

by user49204 at July 02, 2015 11:59 AM

Planet Theory

Goodbye SIGACT and CRA

Tuesday I served my last day on two organizations, the ACM SIGACT Executive Committee and the CRA Board of Directors.

I spent ten years on the SIGACT (Special Interest Group on Algorithms and Computation Theory) EC, four years as vice-chair, three years as chair and three years as ex-chair, admittedly not so active those last three years. SIGACT is the main US academic organization for theoretical computer science and organizes STOC as its flagship conference. I tried to do big things, managed a few smaller things (ToCT, a few more accepted papers in STOC, poster sessions, workshops, moving Knuth and Distinguished Service to annual awards, an award for best student presentation, a tiered PC), some of them stuck and some of them didn't. Glad to see a new movement to try big changes to meet the main challenge that no conference, including STOC, really brings the theory community together anymore. As Michael Mitzenmacher becomes chair and Paul Beame takes my place as ex-chair, I wish them and SIGACT well moving forward.

The Computing Research Association's main efforts are promoting computing research to industry and government and increasing diversity in computing research. It's a well-run organization, and we can thank it particularly for helping improve the funding situation for computing in difficult financial times. The CRA occasionally puts out best-practices memos, like a recent one recommending quality over quantity in hiring and promotion. Serving on the board, I most enjoyed interacting with computer scientists from across the entire field, instead of just hanging with theorists at the usual conferences and workshops.

One advantage of leaving these committees: I can now kibbitz more freely on the theory community and computing in general. Should be fun.

by Lance Fortnow (noreply@blogger.com) at July 02, 2015 11:56 AM

StackOverflow

Issues with building scala in Intellij (14.1)

I have created a project in IntelliJ, which consists of a couple of scala modules (maven build). Intellij version 14.1.4, Scala plugin 1.5.2

The compilation and build worked fine until today. Now I get an error, which looks like Intellij does not find the compiler anymore

Error:scalac: 
 while compiling: /home..//test.scala
    during phase: global=terminal, atPhase=xsbt-analyzer
 library version: version 2.10.4
compiler version: version 2.10.4

....

uncaught exception during compilation: java.io.IOException
Error:scalac: No such file or directory

When I execute the Maven build directly, all is fine. I've also tried to include scalac in $PATH. Additionally, I've changed and downloaded the SDKs in the project settings, but with no effect.

Is this a bug?

I've seen a similar issue mentioned here: https://github.com/NetLogo/NetLogo/wiki/Building-with-IntelliJ but the described workaround does not help.

by Hawk66 at July 02, 2015 11:55 AM

Json reader with different type in an array

I would like to write a Json reader for such Json

    {
        "headers": [
            {
                "id": "time:monthly",
                "type": "a"
            },
            {
                "id": "Value1",
                "type": "b"
            },
            {
                "id": "Value2",
                "type": "b"
            }
        ],
        "rows": [
            [
                "2013-01",
                4,
                5
            ],
            [
                "2013-02",
                3,
                6
            ]
        ]
    }

I know (thanks to the headers) that in each element of rows the first value is of type a, and the second and third are of type b. My goal is to create a row object (List[a], List[b]) (the number of elements of type a and b varies, which is why I use List). My question is: how can I parse rows, i.e. how can I read a JSON array containing different types of objects and without an id?
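
One way to sketch this (assuming spray-json here; the same positional idea works with Play JSON's Reads) is to let the headers decide how many leading values of each row are of type a, and split the row accordingly:

import spray.json._

def splitRow(headers: Vector[JsValue], row: Vector[JsValue]): (List[String], List[BigDecimal]) = {
  // count how many headers declare type "a"; those values come first in each row
  val aCount = headers.count(_.asJsObject.fields("type") == JsString("a"))
  val (aPart, bPart) = row.splitAt(aCount)
  val as = aPart.collect { case JsString(s) => s }.toList
  val bs = bPart.collect { case JsNumber(n) => n }.toList
  (as, bs)
}

// For the document above, the first row would yield (List("2013-01"), List(4, 5)).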

by Chedly Brgb at July 02, 2015 11:42 AM

QuantOverflow

Speed of mean reversion of an interest rate model

I would like to have a bit more of intuition about the concept of "speed of mean reversion" for an interest rate model, e.g. Vasicek or CIR. In particular, is a negative speed of mean reversion possible? What's the connection between a mean reverting process and an AR(1) process? Does explosive AR(1) imply negative speed of mean reversion?
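
For intuition, the standard textbook link between the two notions, using the Vasicek model as an example: the dynamics $dr_t = \kappa(\theta - r_t)\,dt + \sigma\,dW_t$, sampled at a fixed step $\Delta$, have the exact discretization
$$r_{t+\Delta} = \theta\left(1 - e^{-\kappa\Delta}\right) + e^{-\kappa\Delta}\,r_t + \varepsilon_{t+\Delta},$$
which is an AR(1) with autoregressive coefficient $\phi = e^{-\kappa\Delta}$. A positive speed of mean reversion $\kappa > 0$ corresponds to $0 < \phi < 1$, while $\kappa < 0$ gives $\phi > 1$, i.e. an explosive AR(1) rather than a mean-reverting one.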

by Egodym at July 02, 2015 11:12 AM

Fefe

The CDU now wants to create a legal right to 50 Mbps internet ...

The CDU now wants to create a legal right to 50 Mbps internet. The CDU. No, really! The CDU! The party that, for as long as anyone can remember, has successfully prevented us from getting FTTH here like elsewhere. THEY! THEY now have the nerve to propose THAT! What outrageous cheek!

July 02, 2015 11:01 AM

StackOverflow

Highlight arguments in function body in vim

A little something that could be borrowed from IDEs. So the idea would be to highlight function arguments (and maybe scoped variable names) inside function bodies. This is the default behaviour for some C:

plain vim highlighting

Well, if I were to place the cursor inside func I would like to see the arguments foo and bar highlighted to follow the algorithm logic better. Notice that the similarly named foo in func2 wouldn't get highlit. This luxury could be omitted though...

func hilighted

Using locally scoped variables, I would also like to have locally initialized variables highlit:

highlight i inside for

Finally to redemonstrate the luxury:

luxury again

Not so trivial to write this. I used C to give a general idea. Really, I could use this even more for Scheme/Clojure programming:

for clojure too inside let construct

This should recognize let, loop, for, doseq bindings for instance.

My vimscript-fu isn't that strong; I suspect we would need to

  • Parse (non-regexply?) the arguments from the function definition under the cursor. This would be language specific of course. My priority would be Clojure.
  • define a syntax region to cover the given function/scope only
  • give the required syntax matches

As a function this could be mapped to a key (if very resource intensive) or CursorMoved if not so slow.

Okay, now. Has anyone written/found something like this? Do the vimscript gurus have an idea on how to actually start writing such a script?

Sorry about slight offtopicness and bad formatting. Feel free to edit/format. Or vote to close.

by progo at July 02, 2015 10:51 AM

How to obtain current user metadata in Clojure Friend?

I am using Friend library. I need to provide a request handler that will return user roles (and possibly some other metadata) for currently logged in user.

I've added a simple request handler:

  (GET "/userInfo.do" request
      (friend/identity request))

But this basically returns nil. What is the proper way of fetching user session data?

by siphiuel at July 02, 2015 10:50 AM

QuantOverflow

Total market cap in country, and average p/e per country and continent(europe)

I want to invest my monthly savings in an index fund. However, I am not sure which country's index fund I want to invest in. I am afraid of investing in an index fund that is so overvalued it goes through the roof and then doesn't manage to come back for a very long time, like the Nikkei in 1989, the market in 1929, or maybe the Nasdaq in 2000.

I just read an article about the Warren Buffett indicator, which says that the ratio of a country's GDP to its total market cap helps determine whether that country's stock market is overvalued. I also hear a lot that the average P/E ratio of the market helps determine whether the market is overvalued.

Where I live, I have access to index funds for the following markets: China, Hong Kong, Singapore, US, Europe, Taiwan, South Korea, Japan, Australia, India, Russia, Brazil.

My question is: where can I get the historical total market cap, GDP, and average P/E of the market for the countries above? This would really help me allocate my savings and avoid investing my money in an overvalued market. Thanks heaps.

by jay at July 02, 2015 10:44 AM

/r/osdev

Are there reasons the suspend/hibernate operations should not or cannot be completely transparent to user space?

This message by mpv (presumably information received via libasound, the user space part of ALSA) raised this question:

[ao/alsa] PCM in suspend mode, trying to resume. 

I'm thinking the likely only case is if the device doesn't support the operation?

submitted by seekingsofia
[link] [4 comments]

July 02, 2015 10:35 AM

StackOverflow

InfluxDB Cannot see databases from localhost:8083 + Cannot access Command Line Interface

Please feel free to redirect me to any other place if this isn't the right one for this question.

Problem: When I log in to the administration panel at "localhost:8083" with "root"/"root", I cannot see the existing databases or the data in them. Also, I have no way to access InfluxDB from the command line.

Also the line "sudo /etc/init.d/influxdb start" does not work for my setup. I have to go into /etc/init.d/ and run "sudo ./influxdb start -config=config.toml" in order to get the server running.

I've installed influxDB v0.8 from https://influxdb.com/docs/v0.8/introduction/installation.html for Ubuntu 14.04.

I've been developing a Clojure program using the Capacitor API just to get started and interact with InfluxDB. It runs well: I can create, delete, insert, and query a database without problems.

"netstat -anp | grep LISTEN" confirms me that ports 8083 8086 8090 and 8099 are listening.

I've been Googling all around but cannot manage to get a solution.

Thanks for the support and enjoy building things !

by Hichame at July 02, 2015 10:32 AM

What is transducer in functional language? [duplicate]

This question already has an answer here:

I notice Clojure 1.7 introduces a new feature called transducers. I read the transducers documentation, but it is not easy to understand why they are needed. Is there some simple code to explain how to use transducers and how to solve some problem using them?

by user2219372 at July 02, 2015 10:16 AM

CompsciOverflow

Data structure for ordered counted set

Is there a name for a counted set (multiset) that is ordered?

For example, let's say this data structure represents a shopping cart (or basket if you're British). The shopping cart shows the order in which items were added to the cart, unless an item of the same type was already added, in which case a count associated with the item is incremented.

Is this a well studied data structure or just a specialized multiset? I imagine it could be represented with an internal data structure of a dictionary and an array.
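
As a sketch of the dictionary-plus-order idea in code (using an insertion-ordered map instead of a separate array):

import scala.collection.mutable

// An insertion-ordered counted set: a count per item, iterated in first-insertion order.
class OrderedCountedSet[A] {
  private val counts = mutable.LinkedHashMap.empty[A, Int]

  def add(item: A): Unit = counts(item) = counts.getOrElse(item, 0) + 1

  def count(item: A): Int = counts.getOrElse(item, 0)

  def items: Iterator[(A, Int)] = counts.iterator
}

// cart.add("milk"); cart.add("eggs"); cart.add("milk")
// cart.items yields ("milk", 2), ("eggs", 1) in insertion order.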

by Steve Moser at July 02, 2015 10:14 AM

Fred Wilson

Maybe They Do Understand Your Business

Farhad Manjoo has a piece in the NY Times discussing something we’ve been talking about ad nauseam here at AVC in the past year or two, namely that venture backed tech companies are waiting much longer to go public and in the process creating a “private IPO” market which in turn is increasingly putting huge valuations on a large number of venture backed companies, including a bunch of USV portfolio companies.

There is an unfortunate quote in Farhad’s post which suggests that the public markets are clueless:

If you can get $200 million from private sources, then yeah, I don’t want my company under the scrutiny of the unwashed masses who don’t understand my business

First, the public markets are not “unwashed masses.” They are full of very sophisticated investors who, I suspect, do understand these businesses very well.

It is true that Wall Street will not be tolerant of missed expectations. It is true that Wall Street may focus too much on short term numbers. It is true that you may not be able to control what numbers Wall Street decides to obsess over when it comes to valuing your company.

But I think the tech sector is making a huge mistake in thinking that it knows its companies and how to value them better than Wall Street does. That kind of thinking is arrogance, and pride comes before the fall.

by Fred Wilson at July 02, 2015 10:08 AM

StackOverflow

Replace field values of a nested object dynamically

I am trying to write integration tests for my Scala application (with akka-http). I am running into a problem for which I am not able to find a solution.

My Case classes are as below:

case class Employee(id:Long, name:String, departmentId:Long, createdDate:Timestamp) extends BaseEntity
case class EmployeeContainer(employee:Employee, department:Department)  extends BaseEntity

I have a method like this

trait BaseTrait[E<:BaseEntity, C <: BaseEntity]{
    def getById(id:Long): Future[List[C]] = {
       //query from db and return result.
    }

    def save(obj:E) = {
      //set the createDate field to the current timestamp
      //insert into database
    }

}

I can extend my class with BaseTrait and just override the getById() method. The rest of the layers are provided by our internal framework.

class MyDao extends BaseTrait[Employee, EmployeeContainer] {
  override def getById(id: Long) = {
    for {
      emp <- getFromDb(id)
      dept <- DeptDao.getFromDb(emp.departmentId)
      container = EmployeeContainer(emp, dept)
    } yield container
  }
}

So in the REST layer, I will be getting the response as the EmployeeContainer. The problem I am now facing is that the created date is automatically updated with the current timestamp. So when I get back the result, the timestamp in the object I passed to the save() method will have been overwritten with the current time. When I write the test case, I need an object to compare against, but the timestamp of that object and the one I get back will never be the same.

Is there any way I can replace every occurrence of createdDate with a known timestamp value so that I can compare it in my test case? The main problem is that I cannot predict the structure of the container (it can have multiple case classes (nested or flat), with or without createdDate fields).

I was able to replace the field using reflection if it comes in the main case class, but unable to do for nested structures.
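
One possible direction, sketched here only as an illustration (it is not the framework's API): instead of rewriting the createdDate fields, compare the expected and actual structures while treating any Timestamp as equal, recursing through nested case classes via Product:

import java.sql.Timestamp

def equalIgnoringTimestamps(a: Any, b: Any): Boolean = (a, b) match {
  case (_: Timestamp, _: Timestamp) => true
  case (xs: Seq[_], ys: Seq[_]) if xs.length == ys.length =>
    xs.zip(ys).forall { case (x, y) => equalIgnoringTimestamps(x, y) }
  case (p1: Product, p2: Product) if p1.getClass == p2.getClass =>
    p1.productIterator.zip(p2.productIterator).forall { case (x, y) => equalIgnoringTimestamps(x, y) }
  case _ => a == b
}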

by Yadu Krishnan at July 02, 2015 10:04 AM

define project specific tasks in leiningen

Is there a way to define rake-like tasks within a project for Leiningen?

I want to define a custom task in my Leiningen project.clj that will invoke a function in one of my project's namespaces.

by user1896766 at July 02, 2015 10:04 AM

TheoryOverflow

Hierarchy theorem for NTIME intersect coNTIME?

$\newcommand{\cc}[1]{\mathsf{#1}}$Does a theorem along the following lines hold: If $g(n)$ is a little bigger than $f(n)$, then $\cc{NTIME}(g) \cap \cc{coNTIME}(g) \neq \cc{NTIME}(f) \cap \cc{coNTIME}(f)$?

It's easy to show that $\cc{NP} \cap \cc{coNP} \neq \cc{NEXP} \cap \cc{coNEXP}$, at least. Proof: Assume not. Then $$\cc{NEXP} \cap \cc{coNEXP} \subseteq \cc{NP} \cap \cc{coNP} \subseteq \cc{NP} \cup \cc{coNP} \subseteq \cc{NEXP} \cap \cc{coNEXP},$$ so $\cc{NP} = \cc{coNP}$, and hence (by padding) $\cc{NEXP} = \cc{coNEXP}$. But then our assumption implies that $\cc{NP} = \cc{NEXP}$, contradicting the nondeterministic time hierarchy theorem. QED.

But I don't even see how to separate $\cc{NP} \cap \cc{coNP}$ from $\cc{NSUBEXP} \cap \cc{coNSUBEXP}$, as diagonalization seems tricky in this setting.

by William Hoza at July 02, 2015 09:53 AM

StackOverflow

How to test properties of random generator

Using Scala, I have a method that returns a set of 5 random numbers, which should be between 1 and a constant LIMIT.

What's the best approach to test that an answer will never have more or fewer than 5 elements, and that all elements are between 1 and LIMIT? Making a simple test is easy, but should I make a loop of, let's say, 1000 iterations to test it better? Or is there some feature in unit testing for such cases?

Using Scala and ScalaTest.FunSuite
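
A sketch with ScalaTest (generate() and LIMIT below are placeholders for your own method and constant): repeat the call many times and assert the invariants on every result. ScalaCheck's forAll would be the more idiomatic property-based alternative if adding a dependency is an option.

import org.scalatest.FunSuite

class RandomNumbersSuite extends FunSuite {
  val LIMIT = 50                     // placeholder for your constant
  def generate(): Seq[Int] = ???     // placeholder for the method under test

  test("returns exactly 5 numbers, all between 1 and LIMIT") {
    for (_ <- 1 to 1000) {
      val result = generate()
      assert(result.size == 5)
      assert(result.forall(n => n >= 1 && n <= LIMIT))
    }
  }
}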

by pedrorijo91 at July 02, 2015 09:40 AM

QuantOverflow

volatility factor

I am trying to add a volatility factor to the Fama-French factor model. Does anybody know of a source where I can get data for a "volatility-mimicking factor", or can you suggest a simple methodology for calculating one? Ang (2006) introduces an FVIX factor and points out that the change in VIX cannot be used directly. I am open to trying other volatility factors if someone has a suggestion.

by Rohit Arora at July 02, 2015 09:40 AM

Long-Term Government Bond Yields

On the Federal Reserve of St. Louis FRED website we can find the 10-year government bond yields: https://research.stlouisfed.org/fred2/data/IRLTLT01USM156N.txt.

I chose monthly frequency and percent units. I'm wondering if the yields are quoted annually even though they have a monthly frequency.

by Egodym at July 02, 2015 09:36 AM

StackOverflow

Elegant way handling both missing key and null values from Scala Map

I understand

  • in Scala use of null should be avoided
  • and Map.get will return a Option[B] and I can use .getOrElse to get the value and fallback to a default value

e.g.

map.getOrElse("key1","default")

Meanwhile I am interacting with a Java library in which some values are null,

e.g. Map("key1" -> null)

getOrElse will throw a null pointer exception in this case.

I want to handle both cases and result in writing something like this

  def getOrElseNoNull[A,B](map:Map[A,B],key:A,default:B) = {
    map.get(key) match{
      case Some(x) if x != null => x
      case _ => default
    }
  }

which is quite ugly (the values are of type Any and I need a String for that key):

getOrElseNoNull(map, "key1", "").asInstanceOf[String]

Is it possible to use an implicit to extend the map, or is there some other elegant way?
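
One possible sketch using an implicit class (nothing more than the null check wrapped as an extension method):

object MapOps {
  implicit class NullSafeMap[A, B](val underlying: Map[A, B]) extends AnyVal {
    def getOrElseNoNull(key: A, default: B): B =
      underlying.get(key) match {
        case Some(v) if v != null => v
        case _                    => default
      }
  }
}

// import MapOps._
// map.getOrElseNoNull("key1", "").asInstanceOf[String]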

by vincentlcy at July 02, 2015 09:26 AM

How to set Content-Type as application/json in httpkit

I use http-kit as my HTTP client. I have tried many solutions to make the Content-Type header application/json, but all have failed.

Here is my code:

(require '[org.httpkit.client :as http])

(http/post 
  url
  { :query-params {"q" "foo, bar"}
    :form-params {"q" "foo, bar"}
    :headers {"Content-Type" "application/json; charset=utf-8"}})

Posting with the code above gets response status 200, but the Content-Type is application/x-www-form-urlencoded.

And if I delete the application/x-www-form-urlencoded line, I get response status 400.

PS: I take flask as web server.

by keroro520 at July 02, 2015 09:25 AM

"IO error while decoding Routes.scala with UTF-8" when compiling Play Framework project

When I compile my project, the console will show:

[error] IO error while decoding Routes.scala with UTF-8,Please try specifying another one using the -encoding option"

What might be the reason for this error?

by RyuGou at July 02, 2015 09:21 AM

CompsciOverflow

Given an array of scores that a team can score in one game and the final team score. Return a list of all possible intermediate scores [on hold]

You are given an array of scores that a team can score in one game and the final team score. Write the code that returns a list of all possible intermediate scores.

E.g., the input array is the following: [1, 4, 5, 3]. So the team can score at most 1 in the first game, 4 in the second game, and so on.

How can this problem be solved? The only solution I have in mind is some kind of brute-force approach.

by Maksim Dmitriev at July 02, 2015 09:19 AM

How do binary trees use memory to store its data?

So I know that arrays use a block of contiguous memory addresses to store data, and that lists (not linked lists) make use of static arrays: when data is appended to the list and there is no room, a new, larger static array is created elsewhere in memory so more data can be stored.

My question is, how do binary trees use memory to store data? Is each node a memory location which points to two other memory locations elsewhere...not necessarily contiguous locations? Or are they stored in contiguous blocks of memory too like a static array or dynamic list?
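
To make the "node points to two other nodes" picture concrete, here is a small sketch in Scala: each Node is its own heap object holding references to its children, so the nodes need not sit in contiguous memory the way array elements do.

sealed trait Tree[+A]
case object Leaf extends Tree[Nothing]
case class Node[A](value: A, left: Tree[A], right: Tree[A]) extends Tree[A]

// Three separate objects on the heap, linked only by references:
val t: Tree[Int] = Node(2, Node(1, Leaf, Leaf), Node(3, Leaf, Leaf))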

by sw123456 at July 02, 2015 09:17 AM

QuantOverflow

Can you tell me what this RBloomberg formula means?

I've been asked to re-create a spreadsheet that used RBloomberg using a different data source. But I'm having trouble figuring out exactly what one of the spreadsheet's formulas does. Can anyone tell me what exactly the following formula is calculating?

I think it's something along the lines of (CurrentPrice/30dayEMA-1)*100. But I can't seem to get that calculation to match the values given by this formula. I wonder if this is caused by differences in things like whether this formula is using adjusted prices or not, when the EMA calculation starts, etc.

Thanks!

=IF($A19<>"",(BDH($A19,"PX_LAST",$B$7,$B$7,"Days=A", "Fill=P", "Dts=H")/BTH($A19,"EMAVG",$B$7,$B$7,"EMAVG","TAPeriod=30","DSClose=PX_LAST","Dir=V","Dts=H","Sort=A","QtTyp=P","Days=T","Per=cd","UseDPDF=N", "CshAdjNormal=Y","CshAdjAbnormal=Y","CapChg=Y")-1)*100,"")

(A19 is a security name.B7 is the current date.)

by ransomedbyfire at July 02, 2015 09:05 AM

StackOverflow

Play Framework how to set jdbc properties after startup

In my play application the database settings are not known before startup of the application. I have to read them from an environment variable after automatic deployment and start of the application.

The platform the app is deployed on is Cloud Foundry, and there is an environment variable called VCAP_SERVICES (a JSON string) in which all services are listed, e.g. the database service, including its credentials.

Is there a preferred way to do this, in the sense of still being able to use things like:

DataSource ds = DB.getDatasource();

by Subby at July 02, 2015 09:05 AM

Planet Clojure

How to Set Up a Clojure Environment with Ansible

This article is brought with ❤ to you by Semaphore.

Introduction

In this tutorial, we will walk through the process of setting up a local environment for Clojure development using the Ansible configuration management tool. Ansible and Clojure are a perfect fit, as both place an emphasis on simplicity and building with small, focused components. While we will not cover the Clojure development process specifically, we will learn how to set up an environment that can be used across a whole host of Clojure projects.

Additionally, we will see how using a tool like Ansible will help us implement the Twelve-Factor App methodology. The Twelve-Factor App is a collection of best practices collected by the designers of the Heroku platform.

By the end of this tutorial, you will have a virtualized environment for running a Clojure web application, as well as all supporting software, including MongoDB and RabbitMQ. You will be able to quickly get up and running on multiple development machines. You can even use the same Ansible configuration to provision a production server.

Prerequisites

For the purposes of this tutorial, we will be using Vagrant to spin up several VMs for our development environment. If you have not used Vagrant before, please make sure to install the following packages for your OS before continuing:

Additionally, you will want to have Leiningen installed locally.

Finally, this tutorial uses an existing chat application as an example, so go ahead and clone the repo with this command:

git clone https://github.com/kendru/clojure-chat.git

Unless otherwise specified, the shell commands in the rest of this tutorial should be run from the directory into which you have cloned the repository.

What Makes Ansible Different

With mature configuration management products such as Puppet and Chef available, why would you choose Ansible? It is something of a newcomer to the field, but it has gained a lot of traction since its initial release in 2012. The three things that set Ansible apart are that it is agentless, it uses data-driven configuration, and it is also good for task automation.

Agentless

Unlike Puppet and Chef, Ansible does not require any client software to be installed on the machines that it manages. It only requires Python and SSH, which are included out of the box on every Linux server.

The agentless model has a couple of advantages. First, it is dead simple. The only machine that needs anything installed on it is the one that you will run Ansible from. Additionally, not having any client software installed on the machines you manage means that there are fewer components in your infrastructure that you need to worry about failing.

The simplicity of Ansible's agentless model is a good fit in the Clojure community.

Data-Driven Configuration

Unlike Puppet and Chef, which specify configuration using a programming language, Ansible keeps all configuration in YAML files. At first, this may sound like a drawback, but as we will see later, keeping configuration as data makes for much cleaner, less error-prone codebase.

Once again, Clojure programmers are likely to see the value of using data as the interface to an application. Data is easy to understand, can be manipulated programmatically, and working with it does not require learning a new programming language.

When you adopt Chef, you need to know Ruby. With Puppet, you need to learn the Puppet configuration language. With Ansible, you just need to know how YAML works (if you don't already, you can learn the syntax in about 5 minutes).

Task-Based Automation

In addition to system configuration, Ansible excels at automating repetitive tasks that may need to be run across a number of machines, such as deployments. An Ansible configuration file (called a playbook) is read and executed from top to bottom, as a shell script would be. This allows us to describe a procedure, and then run it on any number of host machines.

For example, you may use something like the following playbook to deploy a standalone Java application that relies on the OS's process manager, such as Upstart or systemd.

---
# deploy.yml
# Deploy a new version of "myapp"
#
# Usage: ansible-playbook deploy.yml --extra-vars "version=1.2.345"

- hosts: appservers
  sudo: yes
  tasks:
    - name: Download new version from S3
      s3: bucket=acme-releases object=/myapp/{{ version }}.jar dest=/opt/bin/myapp/{{ version }}.jar mode=get

    - name: Move symlink to point to new version
      file: src=/opt/bin/myapp/{{ version }}.jar dest=/opt/bin/myapp/deployed.jar state=link force=yes
      notify: Restart myapp

  handlers:
    - name: Restart myapp
      action: service name=myapp state=restarted

This example playbook downloads a package from Amazon S3, creates a symlink, and restarts the system service that runs your application. From this simple playbook, you could deploy your application to dozens — or even hundreds — of machines.

Installing Ansible

If you have not already installed Vagrant and Leiningen, please do so now. The following steps require that both are present on your local machine. We also assume that you already have Python installed. If you are running any flavor of Linux or OSX, you should have Python.

Installing Ansible is a straightforward process. Check out the Ansible docs to see if there is a pre-packaged download available for your OS. Otherwise, you can install with Python's package manager, pip.

sudo easy_install pip
sudo pip install ansible

Now, let's verify that the install was successful:

$ ansible --version
ansible 1.9.1
  configured module search path = None

Provisioning Vagrant with Ansible

Now that all dependencies are installed, it's time to get our Clojure environment set up.

The master branch for this tutorial's git repository contains a completed version of all configuration. If you would like to follow along and build out the playbooks yourself, you can check out the not-provisioned tag:

git checkout -b follow-along tags/not-provisioned

At this point, we want to instruct Vagrant to provision our virtual environment with Ansible. One of the key concepts in Ansible is that of an inventory, which contains named groups of host names or IP addresses so that Ansible can configure these hosts by group name. Thankfully, Vagrant will automatically generate an inventory for us. We just need to specify how to group the VMs by adding the following to our Vagrantfile:

config.vm.provision "ansible" do |ansible|
  ansible.groups = {
    "application" => ["app"],
    "database" => ["infr"],
    "broker" => ["infr"],
    "common:children" => ["application", "database", "broker"]
  }
end

This creates 4 groups of servers, each with a single virtual machine. Notice that both the database and broker groups have the same server (infr). This will cause all configuration for both groups to be applied to the same VM.

While we could start up our Vagrant environment now, Ansible would have nothing to do. Let's fix that by writing some plays to provision our environment.

Writing Ansible Plays

Before we dig into the plays that we need for our application dependencies, let's write a simple task to place a message of the day (motd) on each of the servers that will be displayed when the user logs in. We will be using a role-based layout for our Ansible configuration, so let's create a common role and add our config. Your directory structure should look something like the following:

Vagrantfile
...
config/
└── roles
    └── common
        ├── tasks
        └── templates

Next, we'll add a main.yml file to the tasks directory that will define the motd task.

---
# config/roles/common/tasks/main.yml
# Tasks common to all hosts

- name: Install motd
  template: src=motd.j2 dest=/etc/motd owner=root group=root mode=0644

Briefly, this file defines a single task that uses the template module built into Ansible to take a file from this role's templates directory and copy it onto some remote machine, replacing the template variables with data from Ansible.

Along with this task, we'll create the motd.j2 template.

# config/roles/common/templates/motd.j2
Welcome to {{ ansible_hostname }}
This message was installed with Ansible

When Ansible copies this file to each host, it will replace {{ ansible_hostname }} with the DNS host name of the machine that it is installed on. There are quite a few variables that are available to all templates, and you can additionally define your own on a global, per-host, or per-group basis. The official documentation has very complete coverage of the use of variables.

Finally, we need to create the playbook that will apply the task that we just wrote to each of our servers.

---
# config/provision.yml
# Provision development environment

- name: Apply common configuration
  hosts: all
  sudo: yes
  roles:
    - common

In order for Vagrant to use this playbook, we need to add the following line to our Vagrant file in the same block as the Ansible group configuration that we created earlier:

ansible.playbook = "config/provision.yml"

We can now provision our machines. If you have not yet run vagrant up, running that command will download and initialize VirtualBox VMs and provision them with Ansible (this will take a while on the first run). After we run vagrant up initially, we can re-provision the machines with:

$ vagrant provision
# ...
==> app: Running provisioner: ansible...

PLAY [Apply common configuration] *********************************************

GATHERING FACTS ***************************************************************
ok: [app]

TASK: [common | Install motd] ******************************
changed: [app]

PLAY RECAP ********************************************************************
app                        : ok=2    changed=1    unreachable=0    failed=0

If all was successful, you should see output similar to the above for each of the VMs in our environment.

Ansible Play for Application Server

In our infrastructure, the application server will be dedicated to running only the Clojure application itself. The only dependencies for this server are Java and Leiningen, the Clojure build tool. On a production machine, we would probably not install Leiningen, but it will be helpful for us to build and test our application on the VM.

Let's go ahead and create two separate roles called "java" and "lein".

mkdir -p config/roles/{java,lein}/tasks
cat <<EOF | tee config/roles/java/tasks/main.yml config/roles/lein/tasks/main.yml
---
# TODO

EOF

Next, let's add these roles to a play at the end of our playbook.

# config/provision.yml
- name: Set up app server
  hosts:
    - application
  sudo: yes
  roles:
    - java
    - lein

For the purpose of our application, we would like to install the Oracle Java 8 JDK, which is not available from the standard Ubuntu repositories, so we will add a repository from WebUpd8 and use debconf to automatically accept the Oracle Java license, which is normally an interactive process. Thankfully, there are already Ansible modules for adding apt repositories as well as changing debconf settings. See why they say that Ansible has "batteries included"?

---
# config/roles/java/tasks/main.yml
# Install Oracle Java 8

- name: Add WebUpd8 apt repo
  apt_repository: repo='ppa:webupd8team/java'

- name: Accept Java license
  debconf: name=oracle-java8-installer question='shared/accepted-oracle-license-v1-1' value=true vtype=select

- name: Update apt cache
  apt: update_cache=yes

- name: Install Java 8
  apt: name=oracle-java8-installer state=latest

- name: Set Java environment variables
  apt: name=oracle-java8-set-default state=latest

Next, we'll add the task to install Leiningen. Instead of having Ansible download Leiningen directly from the internet, we will download a copy and make it part of our configuration so that we can easily version it:

mkdir config/roles/lein/files
wget https://raw.githubusercontent.com/technomancy/leiningen/stable/bin/lein -O config/roles/lein/files/lein

With that done, the actual task for installing Leiningen becomes a one-liner.

---
# config/roles/lein/tasks/main.yml
# Install leiningen

- name: Copy lein script
  copy: src=lein dest=/usr/bin/lein owner=root group=root mode=755

Let's make sure that everything is working:

$ vagrant provision
# ... lots of output
PLAY RECAP ********************************************************************
app                        : ok=9    changed=6    unreachable=0    failed=0

Ansible Plays for Infrastructure Server

Next up, we'll add the play that will set up our infr server with MongoDB and RabbitMQ. We'll create roles for each of these applications, and we'll create plays to apply the mongodb role to servers in the database group, and the rabbitmq role to the servers in the broker group. If you recall, we only have the infr VM in each of those groups, so both roles will be applied to that same server.

We'll set up the role skeletons similar to the way we did with the java and lein roles.

mkdir -p config/roles/{mongodb,rabbitmq}/{tasks,handlers}
cat <<EOF | tee config/roles/mongodb/tasks/main.yml config/roles/rabbitmq/tasks/main.yml
---
# TODO

EOF

This time, we'll add two separate plays to our playbook.

# config/provision.yml
- name: Set up database server
  hosts:
    - database
  sudo: yes
  roles:
    - mongodb

- name: Set up messaging broker server
  hosts:
    - broker
  sudo: yes
  roles:
    - rabbitmq

Next, we'll fill in the main tasks to install MongoDB and RabbitMQ.

---
# config/roles/mongodb/tasks/main.yml
# Install and configure MongoDB

- name: Fetch apt signing key
  apt_key: keyserver=keyserver.ubuntu.com id=7F0CEB10 state=present

- name: Add 10gen Mongo repo
  apt_repository: >
    repo='deb http://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.0 multiverse'
    state=present

- name: Update apt cache
  apt: update_cache=yes

- name: Install MongoDB
  apt: name=mongodb-org state=latest

- name: Start mongod
  service: name=mongod state=started

# Allow connections to mongo from other hosts
- name: Bind mongo to IP
  lineinfile: >
    dest=/etc/mongod.conf
    regexp="^bind_ip ="
    line="bind_ip = {{ mongo_bind_ip }}"
  notify:
    - restart mongod

- name: Install pip (for adding mongo user)
  apt: name=python-pip state=latest

- name: Install pymongo (for adding mongo user)
  pip: name=pymongo state=latest

- name: Add mongo user
  mongodb_user: >
    database={{ mongo_database }}
    name={{ mongo_user }}
    password={{ mongo_password }}
    state=present

This role is a little more complicated than what we have seen so far, but it's still not too bad. We again use built-in Ansible modules to fetch 10Gen's signing key and add their MongoDB repository. We use the service module to start the mongod system service, then we use the lineinfile module to edit a single line in the default MongoDB config file.

You may have noticed that we used several variables in the arguments to a couple of tasks. There are a couple of places that variables can live, but for this tutorial, we will declare these variables globally in config/group_vars/all.yml. The variables in this file will be applied to every group of servers.

mkdir config/group_vars
cat <<EOF > config/group_vars/all.yml
---
# config/group_vars/all.yml
# Variables common to all groups

mongo_bind_ip: 0.0.0.0
mongo_database: clojure-chat
mongo_user: clojure-chat
mongo_password: s3cr3t
EOF

Additionally, we added a notify line that will notify a handler when the task has been run. Handlers are generally used to start and stop system services. Let's go ahead and create the handler now.

cat <<EOF > config/roles/mongodb/handlers/main.yml
---
# config/roles/mongodb/handlers/main.yml
# MongoDB service handlers

- name: restart mongod
  service: name=mongod state=restarted
EOF

Since there are no surprises in the configuration for RabbitMQ, we will not go into it here. However, the full config is available in the repo for reference.

Ansible Plays for the Application

In order to follow the 12-factor app pattern, we would like to have all of our configuration stored in environment variables. Our app is already set up to read from HTTP_PORT, MONGODB_URI and RABBITMQ_URI. We'll just write a simple task to add those variables to the vagrant user's login shell.

---
# config/roles/clojure_chat/tasks/main.yml
# Add application environment variables

- name: Add env variables
  template: src=app_env.j2 dest=/home/vagrant/.app_env owner=vagrant group=vagrant

- name: Include env variables in vagrant's login shell
  shell: echo ". /home/vagrant/.app_env" >> /home/vagrant/.bash_profile

And we'll go ahead and create the template that defines these variables:

# config/roles/clojure_chat/templates/app_env.j2
# {{ ansible_managed }}

export HTTP_PORT="{{ backend_port }}"

# We are including the auth mechanism because Ansible's mongodb_user
# module creates users with the SCRAM-SHA-1 method by default
export MONGODB_URI="mongodb://{{ mongo_user }}:{{ mongo_password }}@{{ database_ip }}/{{ mongo_database }}?authMechanism=SCRAM-SHA-1"

export RABBITMQ_URI="amqp://{{ rmq_user }}:{{ rmq_password }}@{{ broker_ip }}:5672{% if rmq_vhost == '/' %}/%2f{% else %}{{ rmq_vhost }}{% endif %}"

Verify the Configuration

We just wrote a lot of configuration, but now we have a fully reproducible development environment for a Clojure application that uses a message queue and a NoSQL database.

Let's make sure that everything works by provisioning again and firing up the repl.

vagrant provision
vagrant ssh app
cd /vagrant
lein repl

The repl will load our clojure-chat.main namespace, where we can start up the server.

;; REPL
clojure-chat.main=> (-main)

Now a web server should be running inside our VM on port 3000. We can check it out by visiting http://10.0.15.12:3000/ in a browser.

At this point you have successfully set up a complete, multi-machine development environment for Clojure using Ansible!

This article is brought with ❤ to you by Semaphore.

by Andrew Meredith at July 02, 2015 08:52 AM

StackOverflow

Functional Language for (biomedical) Engineering [on hold]

After learning Scala for a bit, I've decided to learn a functional language. However, Scala might not be the ideal language for my field (Biomedical Engineering).

So: which functional languages are most used and/or most practical (and look good on a CV) in the area of biomedical engineering? Specifically for bioinformatics and imaging.

by BigBadWolf at July 02, 2015 08:50 AM

Correct form of letrec in Hindley-Milner type system?

I'm having trouble understanding the letrec definition for HM system that is given on Wikipedia, here: https://en.wikipedia.org/wiki/Hindley%E2%80%93Milner_type_system#Recursive_definitions

For me, the rule translates roughly to the following algorithm:

  1. infer types on everything in the letrec definition part
    1. assign temporary type variables to each defined identifier
    2. recursively process all definitions with temporary types
    3. in pairs, unify the results with the original temporary variables
  2. close (with forall) the inferred types, add them to the basis (context) and infer types of the expression part with it

I'm having trouble with a program like this:

letrec
 p = (+)     --has type Uint -> Uint -> Uint
 x = (letrec
       test = p 5 5
     in test)
in x

The behavior I'm observing is as follows:

  • definition of p gets temporary type a
  • definition of x gets some temporary type too, but that's out of our scope now
  • in x, definition of test gets a temporary type t
  • p gets the temporary type a from the scope, using the HM rule for a variable
  • (p 5) gets processed by the HM rule for application; the resulting type is b (and the unification that a unifies with Uint -> b)
  • ((p 5) 5) gets processed by the same rule, resulting in more unifications and type c; a now unifies with Uint -> Uint -> c
  • now, test gets closed to the type forall c.c
  • the variable test in "in test" gets a type instance (of forall c.c) with fresh variables, according to the HM rule for variables, resulting in test :: d (which is unified with test :: t right away)
  • the resulting x effectively has type d (or t, depending on the mood of unification)

The problem: x should obviously have type Uint, but I see no way those two could ever unify to produce that type. There is a loss of information when the type of test gets closed and instantiated again that I'm not sure how to overcome or connect back via substitutions/unifications.

Any idea how the algorithm should be corrected to produce the x :: Uint typing correctly? Or is this a property of the HM system and it simply will not type such a case (which I doubt)?

Note that this would be perfectly OK with standard let, but I didn't want to obfuscate the example with recursive definitions that can't be handled by let.

Thanks in advance

by exa at July 02, 2015 08:34 AM

StackOverflow

Running selenium tests in different browsers and different versions

I am using Scala + Selenium WebDriver + the Cucumber framework. I want to run my Selenium tests in different browsers and, importantly, in selected versions of those browsers. As I can't use any cross-browser testing tool like Sauce Labs or BrowserStack, is there any way to do this? My question may sound silly, but I couldn't find useful information when I searched.

Your answer would be very helpful.

by priyanka at July 02, 2015 08:20 AM

Play2 Framework - Scala - Silhouette check token manually

I'm using Play Framework with Scala to build a RESTful API. To implement authentication I'm using the play-silhouette plugin, with the use of a BearerTokenAuthenticator. It works perfectly.

The problem is that I must implement a service that uses a WebSocket to push real-time updates, but I can't manage to set up user authentication for it.

Silhouette provides support for that (doc); the problem is that I can't find a way to put the token in the header of the WebSocket handshake requests. I did a lot of research, but without any result.

I thought that I could pass the token in the query string, instead of passing it in the request header.

My question is, how can I validate a token manually with silhouette?

by tano at July 02, 2015 08:04 AM

Spray-json deserializing nested object

How to deserialize nested objects correctly in spray-json?

    import spray.json._

    case class Person(name: String)

    case class Color(n: String, r: Int, g: Int, b: Int, p: Person)

    object MyJsonProtocol extends DefaultJsonProtocol {

      implicit object ColorJsonFormat extends RootJsonFormat[Color] {
        def write(c: Color) = JsObject(
          "color-name" -> JsString(c.n),
          "Green" -> JsNumber(c.g),
          "Red" -> JsNumber(c.r),
          "Blue" -> JsNumber(c.b),
          "person-field" -> JsObject("p-name" -> JsString(c.p.name))
        )

        def read(value: JsValue) = {
          value.asJsObject.getFields("color-name", "Red", "Green", "Blue", "person-field") match {
            case Seq(JsString(name), JsNumber(red), JsNumber(green), JsNumber(blue), JsObject(person)) =>
              Color(name, red.toInt, green.toInt, blue.toInt, null) //gotta replace null with correct deserializer
            case _ => throw new DeserializationException("Color expected")
          }
        }
      }

    }

    import MyJsonProtocol._

    val jsValue = Color("CadetBlue", 95, 158, 160, Person("guest")).toJson

    jsValue.prettyPrint

    val color = jsValue.convertTo[Color] //person is missing of course

On a side note, how does spray-json help with serializing to a map of fields (with nested maps for nested objects)?
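
A possible way to fill in the null placeholder, as a sketch: match the nested person-field object and read its p-name field explicitly.

        def read(value: JsValue) = {
          value.asJsObject.getFields("color-name", "Red", "Green", "Blue", "person-field") match {
            case Seq(JsString(name), JsNumber(red), JsNumber(green), JsNumber(blue), JsObject(personFields)) =>
              personFields.get("p-name") match {
                case Some(JsString(pName)) => Color(name, red.toInt, green.toInt, blue.toInt, Person(pName))
                case _ => throw new DeserializationException("Person expected")
              }
            case _ => throw new DeserializationException("Color expected")
          }
        }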

by user3103600 at July 02, 2015 08:03 AM

Building scala with SBT to make JAR and folder with dependencies

UPD: at last I conquered (it seems) the sbt-assembly approach, so now the question is not as urgent (though I'm still curious to extend my knowledge of sbt usage).

I have a project in Scala (a kind of test utility) which is currently run only via sbt run. However, for a certain demo I want to prepare it in a form that does not require sbt or Scala to be preinstalled (only a JVM).

First I tried to use the sbt-assembly plugin but soon got lost fighting duplicate entries. So now I'm curious whether I can simply compile it to:

  • single jar-file containing application itself;
  • and lib directory containing raw set of dependency jars.

I hope that in that case it would be easy to run with the help of the Main-Class and Class-Path: ./lib/* fields in the manifest - am I wrong? If this is correct, how can I achieve it?

Thanks in advance!

by Rodion Gorkovenko at July 02, 2015 07:53 AM

Serlaize case class with the variable inside it in scala with jackson

I am trying to serialize a case class using Jackson (fasterxml). I can see the constructor parameters after deserialization (taskRequest and taskNameIn), but not the variables inside the class (jobsRequests is null, for example):

//@JsonIgnoreProperties(ignoreUnknown = true) // tried to remove it with no luck
@JsonAutoDetect
case class Job(taskRequest: List[TaskRequest] = Nil,taskNameIn:String) {
{
this.jobsRequests = taskRequest
    this.taskName= taskNameIn
}
@JsonProperty
@volatile private var jobsRequests: List[TaskRequest] = Nil 

@JsonProperty
var task_name: String = ""

}

Any suggestions?

by user1120007 at July 02, 2015 07:49 AM

TheoryOverflow

Linear diophantine equation in non-negative integers

There's only very little information I can find on the NP-complete problem of solving linear diophantine equation in non-negative integers. That is to say, is there a solution in non-negative $x_1,x_2, ... , x_n$ to the equation $a_1 x_1 + a_2 x_2 + ... + a_n x_n = b$, where all the constants are positive? The only noteworthy mention of this problem that I know of is in Schrijver's Theory of Linear and Integer Programming. And even then, it's a rather terse discussion.

So I would greatly appreciate any information or reference you could provide on this problem.

There are two questions I mostly care about:

  1. Is it strongly NP-Complete?
  2. Is the related problem of counting the number of solutions #P-hard, or even #P-complete?

by 4evergr8ful at July 02, 2015 07:40 AM

StackOverflow

triggering kevent by force

I'm using kqueue for socket synchronization in OS X. I can register an event of interest like the following:

struct kevent change;
EV_SET(&change, connected_socket, EVFILT_READ, EV_ADD, 0, NULL, NULL);
kevent(k_queue_, &change, 1, NULL, 0, NULL);

And the question is, is there a way to trigger this event by force so that the waiting kevent call would return?

by Kay at July 02, 2015 07:40 AM

Finding the 10001st prime number using a Scala stream and one-line functions [on hold]

I want to write a program to find the 10001st prime number using Scala. For this I want to use a Stream, and every function should be one line.

I got the answer now:

object NthPrimeNumber {

  def isFactor(num: Int, divisor: Int) = num % divisor == 0

  def isPrime(num: Int) = num % 2 != 0 && ((3 until num by 2) forall (divisor => !isFactor(num, divisor)))

  val primes: Stream[Int] = 2 #:: Stream.from(3).filter(isPrime)

  // n is 1-based: nthPrimeNumber(1) == 2
  def nthPrimeNumber(n: Int): Int =
    if (n >= 1) (primes take n).last else throw new IllegalArgumentException("n must be positive")

}

by Uma Maheswara Rao Pinninti at July 02, 2015 07:34 AM

Compile error when using a companion object of a case class as a type parameter

I'm creating a number of JSON messages for spray in Scala using case classes. For example:

  case class Foo(name: String, attrs: List[String])
  implicit val fooFormat = jsonFormat2(Foo)
  object Foo {
    case class Invalid(error: String)
  }
  case class Bar(name: String, kv: Map[String, String])
  implicit val barFormat = jsonFormat2(Bar)

In the above snippet, barFormat compiles, but fooFormat does not:

type mismatch; found : Foo.type required: (?, ?) => ? 
 Note: implicit value barFormat is not applicable here because it comes 
 after the application point and it lacks an explicit result type

I don't want to use barFormat in place of fooFormat, and I understand that a case class automatically generates a companion object, but I don't understand why there's a compiler error here, and the error message is difficult for me to decipher. Does anyone know what the problem is here and how to fix it, preferably without removing my Foo companion object?
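
For what it's worth, a commonly suggested workaround is sketched below (the exact incantation can vary by spray-json version): point jsonFormat2 at the apply method and give the implicit an explicit result type, since with an explicit companion object the name Foo refers to the object itself rather than to the (String, List[String]) => Foo constructor function.

  import spray.json._
  import DefaultJsonProtocol._

  case class Foo(name: String, attrs: List[String])
  object Foo {
    case class Invalid(error: String)
  }
  implicit val fooFormat: RootJsonFormat[Foo] = jsonFormat2(Foo.apply)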

by jonderry at July 02, 2015 07:33 AM

QuantOverflow

Range options in BS

I know how barrier options are priced in Black-Scholes scheme.

I'm wondering if an analytical formula also exists for range (corridor) digital options, i.e. options paying only if the price remains between a lower and an upper barrier.

I think that if the joint distribution of the minimum and maximum of a Wiener process is known, an analytical pricing formula should exist. I didn't find it in the literature.

by jimifiki at July 02, 2015 07:17 AM

Factoring risk premium in to Forward Rate calculation

This is a self study question. I'm calculating a forward rate.

Specifically, I have that in a country X, the Spot Rate is 5X/1US. I also have that the 1 year interest rate is 13% in country X and inflation is 12%. The US interest rate is 4% with 3% inflation.

I'm computing the forward rate as:

$F= S(1+i_d)/(1+i_f) = 5 *(1+.04)/(1+.13) = 4.602.$

However I'm also told that X's market risk premium is 300 basis points above US treasuries. I'm unsure how to factor that in....

by user1357015 at July 02, 2015 07:17 AM

Reflection Principle

Let $(\Omega,\mathcal{F},P)$ be a probability space and $\{W_t : t \geq 0\}$ be a standard Wiener process. Let $\tau$ be a stopping time and define
$$W^*(t) = \begin{cases} W_t, & t \leq \tau \\ 2W_{\tau} - W_t, & t > \tau. \end{cases}$$
Why is $W^*(t)$ a standard Wiener process? I want to prove it using the reflection principle. Is that the right approach? Please help me.

by John kent at July 02, 2015 07:14 AM

StackOverflow

Spark Runtime Error - ClassDefNotFound: SparkConf

After installing and building Apache Spark (albeit with quite a few warnings), the compilation of our Spark application (using "sbt package") completes successfully. However, when trying to run our application using the spark-submit script, a runtime error results that states that the SparkConf class definition was not found. The SparkConf.scala file is present on our system, but it seems as if it is not being built correctly. Any ideas on how to solve this?

user@compname:~/Documents/TestApp$ /opt/Spark/spark-1.4.0/bin/spark-submit --master local[4] --jars /opt/Spark/spark-1.4.0/jars/elasticsearch-hadoop-2.1.0.Beta2.jar target/scala-2.11/sparkesingest_2.11-1.0.0.jar ~/Desktop/CSV/data.csv es-index localhost
Warning: Local jar /opt/Spark/spark-1.4.0/jars/elasticsearch-hadoop-2.1.0.Beta2.jar does not exist, skipping.
log4j:WARN No appenders could be found for logger (App).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/07/01 13:56:58 INFO SparkContext: Running Spark version 1.4.0
15/07/01 13:56:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/07/01 13:56:59 WARN Utils: Your hostname, compname resolves to a loopback address: 127.0.1.1; using [IP ADDRESS] instead (on interface eth0)
15/07/01 13:56:59 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/07/01 13:56:59 INFO SecurityManager: Changing view acls to: user
15/07/01 13:56:59 INFO SecurityManager: Changing modify acls to: user
15/07/01 13:56:59 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(user); users with modify permissions: Set(user)
15/07/01 13:56:59 INFO Slf4jLogger: Slf4jLogger started
15/07/01 13:56:59 INFO Remoting: Starting remoting
15/07/01 13:56:59 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@[IP ADDRESS]]
15/07/01 13:56:59 INFO Utils: Successfully started service 'sparkDriver' on port 34276.
15/07/01 13:56:59 INFO SparkEnv: Registering MapOutputTracker
15/07/01 13:56:59 INFO SparkEnv: Registering BlockManagerMaster
15/07/01 13:56:59 INFO DiskBlockManager: Created local directory at /tmp/spark-c206e297-c2ef-4bbf-9bd2-de642804bdcd/blockmgr-8d273f32-589e-4f55-98a2-cf0322a05d45
15/07/01 13:56:59 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
15/07/01 13:56:59 INFO HttpFileServer: HTTP File server directory is /tmp/spark-c206e297-c2ef-4bbf-9bd2-de642804bdcd/httpd-f4c3c67a-d058-4aba-bd65-5352feb5f12e
15/07/01 13:56:59 INFO HttpServer: Starting HTTP Server
15/07/01 13:56:59 INFO Utils: Successfully started service 'HTTP file server' on port 33599.
15/07/01 13:56:59 INFO SparkEnv: Registering OutputCommitCoordinator
15/07/01 13:56:59 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/07/01 13:56:59 INFO SparkUI: Started SparkUI at http://[IP ADDRESS]:4040
15/07/01 13:57:00 ERROR SparkContext: Jar not found at file:/opt/Spark/spark-1.4.0/jars/elasticsearch-hadoop-2.1.0.Beta2.jar
15/07/01 13:57:00 INFO SparkContext: Added JAR file:/home/user/Documents/TestApp/target/scala-2.11/sparkesingest_2.11-1.0.0.jar at http://[IP ADDRESS]:33599/jars/sparkesingest_2.11-1.0.0.jar with timestamp 1435784220028
15/07/01 13:57:00 INFO Executor: Starting executor ID driver on host localhost
15/07/01 13:57:00 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 44746.
15/07/01 13:57:00 INFO NettyBlockTransferService: Server created on 44746
15/07/01 13:57:00 INFO BlockManagerMaster: Trying to register BlockManager
15/07/01 13:57:00 INFO BlockManagerMasterEndpoint: Registering block manager localhost:44746 with 265.4 MB RAM, BlockManagerId(driver, localhost, 44746)
15/07/01 13:57:00 INFO BlockManagerMaster: Registered BlockManager
15/07/01 13:57:00 INFO MemoryStore: ensureFreeSpace(143840) called with curMem=0, maxMem=278302556
15/07/01 13:57:00 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 140.5 KB, free 265.3 MB)
15/07/01 13:57:00 INFO MemoryStore: ensureFreeSpace(12635) called with curMem=143840, maxMem=278302556
15/07/01 13:57:00 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 12.3 KB, free 265.3 MB)
15/07/01 13:57:00 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:44746 (size: 12.3 KB, free: 265.4 MB)
15/07/01 13:57:00 INFO SparkContext: Created broadcast 0 from textFile at Ingest.scala:159
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/SparkConf
    at org.elasticsearch.spark.rdd.CompatUtils.<clinit>(CompatUtils.java:20)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:274)
    at org.elasticsearch.hadoop.util.ObjectUtils.loadClass(ObjectUtils.java:71)
    at org.elasticsearch.spark.package$.<init>(package.scala:14)
    at org.elasticsearch.spark.package$.<clinit>(package.scala)
    at build.Ingest$.main(Ingest.scala:176)
    at build.Ingest.main(Ingest.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.spark.SparkConf
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    ... 17 more
15/07/01 13:57:00 INFO SparkContext: Invoking stop() from shutdown hook
15/07/01 13:57:00 INFO SparkUI: Stopped Spark web UI at http://[IP ADDRESS]:4040
15/07/01 13:57:00 INFO DAGScheduler: Stopping DAGScheduler
15/07/01 13:57:00 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
15/07/01 13:57:00 INFO Utils: path = /tmp/spark-c206e297-c2ef-4bbf-9bd2-de642804bdcd/blockmgr-8d273f32-589e-4f55-98a2-cf0322a05d45, already present as root for deletion.
15/07/01 13:57:00 INFO MemoryStore: MemoryStore cleared
15/07/01 13:57:00 INFO BlockManager: BlockManager stopped
15/07/01 13:57:01 INFO BlockManagerMaster: BlockManagerMaster stopped
15/07/01 13:57:01 INFO SparkContext: Successfully stopped SparkContext
15/07/01 13:57:01 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
15/07/01 13:57:01 INFO Utils: Shutdown hook called
15/07/01 13:57:01 INFO Utils: Deleting directory /tmp/spark-c206e297-c2ef-4bbf-9bd2-de642804bdcd

Here is the build.sbt file:

scalaVersion := "2.11.6"

name := "SparkEsIngest"

version := "1.0.0"

libraryDependencies ++= Seq(
    "org.apache.spark" %% "spark-core" % "1.4.0" % "provided",
    "org.apache.spark" %% "spark-streaming" % "1.4.0" % "provided",
    "org.apache.spark" %% "spark-sql" % "1.4.0" % "provided",
    "org.elasticsearch" % "elasticsearch-hadoop" % "2.1.0.Beta2" exclude("org.spark-project.akka", "akka-remote_2.10") exclude("org.spark-project.akka", "akka-slf4j_2.10") exclude("org.json4s", "json4s-ast_2.10") exclude("org.apache.spark", "spark-catalyst_2.10") exclude("com.twitter", "chill_2.10") exclude("org.apache.spark", "spark-sql_2.10") exclude("org.json4s", "json4s-jackson_2.10") exclude("org.json4s", "json4s-core_2.10") exclude("org.apache.spark", "spark-core_2.10")
  )

if ( System.getenv("QUERY_ES_RESOURCE") != null) {
  println("[info] Using lib/es-hadoop-build-snapshot/ unmanagedBase dir")
  unmanagedBase <<= baseDirectory { base => base / "lib/es-hadoop-build-snapshot" }
} else {
  println("[info] Using lib/ unmanagedBase dir")
  unmanagedBase <<= baseDirectory { base => base / "lib" }
}

resolvers += "conjars.org" at "http://conjars.org/repo"

resolvers += "clojars" at "https://clojars.org/repo"

by kgrimes2 at July 02, 2015 06:59 AM

Possible issues of installing OpenJDK 8 on Ubuntu 12

What are the risks/possible issues of installing OpenJDK 8 from a PPA (https://launchpad.net/~openjdk-r/+archive/ubuntu/ppa) on Ubuntu 12.04.5 LTS? I understand that OpenJDK 8 will never be in the official Ubuntu 12 repository. But what are the consequences of installing it from other repositories?

by robert at July 02, 2015 06:26 AM

Lobsters

Incremental Computation with Adapton / Matthew A Hammer

So, this is sort of a talk by Matthew Hammer introducing people to incremental computation, based on his doctoral work under kind of Umut Acar (the self-adjusting computation guy) but extending it to avoid recomputation in many more situations, by using partial ordering constraints on the computation instead of sort of total ordering. Okay, so this is sort of helping me to understand Acar’s dissertation, which I’ve been finding somewhat tough going, even though I feel like the dissertation is sort of pretty well explained, and this talk is sort of driving me nuts with the, uh, hesitation noises and filler, right?

Okay, but, so, this talk is really helping me put Acar’s dissertation in context, and also kind of gives me an overview of what Hammer is doing, including in some languages like OCaml and Python, which, again, by the way, I care a lot more about than Standard ML or CEAL. So, the other thing is, I think this talk is well worth watching, especially with mplayer -af scaletempo so you can sort of speed it up to 1.6× to compensate for the, uh, hesitation noises, right?

Comments

by kragen at July 02, 2015 06:06 AM

/r/emacs

Ask /r/emacs: Which wiki softwares fit the needs of a small organization and have good org-mode integrations?

org-mode is really great for a personal wiki, but I think most people in my org are not that much into technical stuff. Which wiki would you recommend for a small org that can be integrated with Emacs org-mode?
Thanks for sharing your opinions.

submitted by chocolait
[link] [2 comments]

July 02, 2015 06:05 AM

CompsciOverflow

How do I see SSL/TLS packet data in clear? [on hold]

Hey all, I have a weird question (first-time poster, so forgive me if this is super newb)...

I am wanting to see the inner workings of a SSL/TLS connection between a client and a server at the packet level (in the clear) in order to learn. Is there a way to do this where I will see all of the traffic including the handshake?

Should I just set up a webserver that uses SSL/TLS and then use sslstrip & tcpdump to grab the packet capture? Will that even work in order to grab the full connection in the clear?

I am not super familiar with the SSL/TLS stack and I would like to do this in order to learn how it works at a lower level.

by Mike Johnson at July 02, 2015 06:00 AM

/r/freebsd

UnixOverflow

FreeBSD pkg does not find any packages; where to start debuging

I installed the latest FreeBSD 11 snapshot on an RPi 2. Installing/making a package from ports works fine.

pkg upgrade

works fine as well, but

pkg search nano

does not find any packages. I come from Debian. Where do I start finding the problem?

I already deleted /var/db/pkg/repo-FreeBSD.sqlite and ran

pkg upgrade

again. But this didn't change anything.

Added as requested:

pkg -vv
Version                 : 1.5.2
PKG_DBDIR = "/var/db/pkg";
PKG_CACHEDIR = "/var/cache/pkg";
PORTSDIR = "/usr/ports";
INDEXDIR = "";
INDEXFILE = "INDEX-11";
HANDLE_RC_SCRIPTS = false;
DEFAULT_ALWAYS_YES = false;
ASSUME_ALWAYS_YES = false;
REPOS_DIR [
    "/etc/pkg/",
    "/usr/local/etc/pkg/repos/",
]
PLIST_KEYWORDS_DIR = "";
SYSLOG = true;
ABI = "FreeBSD:11:armv6";
ALTABI = "freebsd:11:armv6:32:el:eabi:softfp";
DEVELOPER_MODE = false;
VULNXML_SITE = "http://vuxml.freebsd.org/freebsd/vuln.xml.bz2";
FETCH_RETRY = 3;
PKG_PLUGINS_DIR = "/usr/local/lib/pkg/";
PKG_ENABLE_PLUGINS = true;
PLUGINS [
]
DEBUG_SCRIPTS = false;
PLUGINS_CONF_DIR = "/usr/local/etc/pkg/";
PERMISSIVE = false;
REPO_AUTOUPDATE = true;
NAMESERVER = "";
EVENT_PIPE = "";
FETCH_TIMEOUT = 30;
UNSET_TIMESTAMP = false;
SSH_RESTRICT_DIR = "";
PKG_ENV {
}
PKG_SSH_ARGS = "";
DEBUG_LEVEL = 0;
ALIAS {
    all-depends = "query %dn-%dv";
    annotations = "info -A";
    build-depends = "info -qd";
    cinfo = "info -Cx";
    comment = "query -i \"%c\"";
    csearch = "search -Cx";
    desc = "query -i \"%e\"";
    download = "fetch";
    iinfo = "info -ix";
    isearch = "search -ix";
    prime-list = "query -e '%a = 0' '%n'";
    leaf = "query -e '%#r == 0' '%n-%v'";
    list = "info -ql";
    noauto = "query -e '%a == 0' '%n-%v'";
    options = "query -i \"%n - %Ok: %Ov\"";
    origin = "info -qo";
    provided-depends = "info -qb";
    raw = "info -R";
    required-depends = "info -qr";
    roptions = "rquery -i \"%n - %Ok: %Ov\"";
    shared-depends = "info -qB";
    show = "info -f -k";
    size = "info -sq";
}
CUDF_SOLVER = "";
SAT_SOLVER = "";
RUN_SCRIPTS = true;
CASE_SENSITIVE_MATCH = false;
LOCK_WAIT = 1;
LOCK_RETRIES = 5;
SQLITE_PROFILE = false;
WORKERS_COUNT = 0;
READ_LOCK = false;
PLIST_ACCEPT_DIRECTORIES = false;
IP_VERSION = 0;
AUTOMERGE = true;
VERSION_SOURCE = "";
CONSERVATIVE_UPGRADE = true;
PKG_CREATE_VERBOSE = false;


Repositories:
  FreeBSD: { 
    url             : "pkg+http://pkg.FreeBSD.org/FreeBSD:11:armv6/latest",
    enabled         : yes,
    priority        : 0,
    mirror_type     : "SRV",
    signature_type  : "FINGERPRINTS",
    fingerprints    : "/usr/share/keys/pkg"
  }

by Jodka Lemon at July 02, 2015 05:47 AM

StackOverflow

Does [Future].tryCompleteWith([Future]) interfere with garbage collection?

I have a Java/Akka program that is using Scala futures to manage a batch job. It is intended to be a long-running program, and I'm concerned that my promises/futures may be leaking memory:

I have a Promise<String> fatalError that typically exists for the lifetime of the program, and that when failed will terminate the current batch and notify an administrator that something has gone wrong.

Each batch has its own Promise<String> completion that when failed will terminate the current batch; this promise can either be failed through a CancellationException (in which case the current job is terminated but the system is still able to accept new jobs), or else via completion.tryCompleteWith(fatalError.future()) (in which case the current job is terminated and the system goes on standby until an administrator intervenes). If the batch executes successfully then completion is filled with a Success<String>([final job status]).

Each batch executes several stored procedures against a database, and each execution gets its own Promise<String> cancellation that, when failed, will cancel the database query - this can be failed via cancellation.tryCompleteWith(completion.future()). When the stored procedure completes, I call cancellation.success("success") to complete the promise.

This means that I'm generating a lot of Promises that only exist for the lifetime of a batch or of a database query, after which they go out of scope, but that are linked to a long-lived Promise via tryCompleteWith. When these promises are fulfilled with Success messages, will this terminate their link to the long-lived promise, since tryCompleteWith can no longer fulfill the promise, or am I looking at a potential memory leak?

// long-lived promise that will hopefully never be fulfilled
Promise<String> fatalError = Futures.promise();

// per-batch promise - we'd usually run about a dozen batch jobs per day
Promise<String> completion = Futures.promise();
try {
  completion.tryCompleteWith(fatalError.future());
} finally {
  // fulfill promise before it goes out of scope
  completion.trySuccess(getStatusMessage());
}

// per-query promise - each batch can have up to a few thousand queries
Promise<String> cancellation = Futures.promise();
try {
  cancellation.tryCompleteWith(completion.future());
} finally {
  // fulfill promise before it goes out of scope
  cancellation.trySuccess("Success");
}

by Zim-Zam O'Pootertoot at July 02, 2015 05:25 AM

Sequentially updating columns of a Matrix RDD

I'm having philosophical issues with RDDs used in mllib.linalg. In numerical linear algebra one wants to use mutable data structures, but since in Spark everything (RDDs) is immutable, I'd like to know if there's a way around this, specifically for the following situation I'm dealing with:

import org.apache.spark.mllib.linalg._
import breeze.numerics._

val theta = constants.Pi / 64
val N = 1000
val Gk: Matrix = Matrices.dense(2, 2, Array(
                               cos(theta), sin(theta),
                               -sin(theta), cos(theta))
                               )
val x0: Vector = Vectors.dense(0.0, 1.0)
var xk = DenseMatrix.zeros(2, N + 1)

Thinking sequentially, I'd like to access/update the first column of xk with x0, which in Scala/Breeze is normally done by xk(::, 0) := x0, and the other columns by

for (k <- 0 to N - 1) {
    xk(::, k + 1) := Gk * xk(::, k)
}

but in mllib.linalg.Matrices there's no (apply-like!) method defined for it. Is just accessing a column (row) against immutability? What if I use RowMatrix? Can I access/update rows then?

My matrices can be local (like above) or distributed and I'd like to know in general, if the above process can be done in a functional way.
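For the purely local case, here is a minimal sketch of a functional alternative; it assumes plain Breeze is acceptable when nothing is distributed, and the names states and xk are only illustrative:

import breeze.linalg.{DenseMatrix, DenseVector}
import scala.math.{Pi, cos, sin}

val theta = Pi / 64
val N = 1000
// 2x2 rotation matrix, written row by row
val Gk = DenseMatrix((cos(theta), sin(theta)), (-sin(theta), cos(theta)))
val x0 = DenseVector(0.0, 1.0)

// Each state is Gk applied to the previous one; no in-place column update is needed.
val states: Seq[DenseVector[Double]] = Iterator.iterate(x0)(x => Gk * x).take(N + 1).toSeq

// Stack the states as columns if a single 2 x (N+1) matrix is wanted afterwards.
val xk: DenseMatrix[Double] = DenseVector.horzcat(states: _*)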

I'd appreciate any comment or help.

by Ehsan M. Kermani at July 02, 2015 05:22 AM

Android + Scala + Intellij 13

Android is a great platform. Scala is a great language. IntelliJ IDEA is a great IDE.

How can they all work together?

Note: this is a self-answered question, but if you have more info, please share it here.

by Markus Marvell at July 02, 2015 05:16 AM

Translate this code in functional javascript (aggregate 2 array in a map)

I've just read about reactive programming and I am enthusiastic about it. So I decided to revise my skill in functional programming. I don't know if this is the right place.

I have two arrays, one of tags and one of tasks that contain tags. I'd like to aggregate the two and end up with a tasksByTagName. I've tried to use lodash but I didn't manage to do it in a readable way, so I'm posting what I've done with plain for statements.

Moreover, I'm interested in understanding how to think in a flow-based way, such as thinking of my aggregate function as a transformation between two streams, as the reactive programming article linked above suggests.

So, let's start with the data:

var tags = [ 
  { id: 1, name: 'Week 26' },
  { id: 2, name: 'week 27' },
  { id: 3, name: 'Week 25' },
  { id: 4, name: 'Week 25' }
];

var tasks = [
  {
    "name": "bar",
    "completed": false,
    "tags": [
      { "id": 1 },
      { "id": 2 }
    ]
  },
  {
    "name": "foo",
    "completed": true,
    "tags": [
      { "id": 1 }
    ]
  },
  {
    "name": "dudee",
    "completed": true,
    "tags": [
      { "id": 3 },
      { "id": 1 },
      { "id": 4 }
    ]
  }
];

And this is my piece of code:

var _ = require('lodash');

function aggregate1(tasks, tags) {

  var tasksByTags = {};
  for(var t in tasks) {
    var task = tasks[t];
    for(var i=0; i<task.tags.length; ++i) {
      var tag = task.tags[i];
      if( !tasksByTags.hasOwnProperty(tag.id) ) {
        tasksByTags[tag.id] = [];  
      }
      tasksByTags[tag.id].push(task.name);
    }
  }

  var tagById = {};
  for(var t in tags) {
    var tag = tags[t];
    tagById[tag.id] = tag.name;
  }

  var tasksByTagsName = _.mapKeys(tasksByTags, function(v, k) {
    return tagById[k];
  })

  return tasksByTagsName;
}

module.exports.aggregate1 = aggregate1;

For completeness, this is also the test with test data:

var tags = [ 
  { id: 1, name: 'Week 26' },
  { id: 2, name: 'week 27' },
  { id: 3, name: 'Week 25' },
  { id: 4, name: 'Week 25' }
];

var tasks = [
  {
    "name": "bar",
    "completed": false,
    "tags": [
      { "id": 1 },
      { "id": 2 }
    ]
  },
  {
    "name": "foo",
    "completed": true,
    "tags": [
      { "id": 1 }
    ]
  },
  {
    "name": "dudee",
    "completed": true,
    "tags": [
      { "id": 3 },
      { "id": 1 },
      { "id": 4 }
    ]
  }
];

var goodResults1 = {
  'Week 26': [ 'bar', 'foo', 'dudee' ],
  'week 27': [ 'bar' ],
  'Week 25': [ 'dudee' ] 
};    



var assert = require('assert')

var aggregate1 = require('../aggregate').aggregate1;

describe('Aggegate', function(){
  describe('aggregate1', function(){
    it('should work as expected', function(){
      var result = aggregate1(tasks, tags);
      assert.deepEqual(goodResults1, result);
    })
  })
})

by nkint at July 02, 2015 05:10 AM

Wondermark

Check out: Crawdads Welcome

crawdads

I came across this comic series when someone mistook it for Wondermark. It’s so lovely! And hand-drawn, which Wondermark isn’t really…I mean, someone drew it, but not me. Wondermark is A COLLABORATION WITH THE DEAD

The above is just a single panel, check out the whole series on Tumblr: Crawdads Welcome, a comic strip about animals by Ezra Butt.

by David Malki at July 02, 2015 04:50 AM

/r/compsci

Need Help With A Dynamic Programming Problem!

I just made a fairly nice write-up of the problem on stack overflow, so here's the link. Any help at all would be amazing. Also, feel free to ask for clarification. I realize I'm pretty bad at explaining all of this, and it's a complicated problem to wrap one's head around.

submitted by Cellax
[link] [2 comments]

July 02, 2015 04:12 AM

StackOverflow

Convert shapeless nested HLists to tree

I am trying to use shapeless to convert from an HList which could potentially have other nested HLists to an explicit tree representation.

Given a tree structure:

sealed trait Tree[T]
case class Node(children: List[Tree[_]]) extends Tree[Any]
case class Leaf[T](value:T) extends Tree[T]

I want to make the following tests pass:

"Hlist to tree" should "return nodes for unnested" in {
    val repr = (2 :: "Two" :: HNil)

    treeifyHlist(repr).head should be(Leaf(2))
    treeifyHlist(repr).tail.head should be(Leaf("Two"))
  }

  "Hlist to tree" should "work for nested" in {
    val repr = (2 :: (1 :: 2 :: 3 :: HNil) :: HNil)

    treeifyHlist(repr).head should be(Leaf(2))
    treeifyHlist(repr).tail.head should be(Node(List(Leaf(1), Leaf(2), Leaf(3))))
  }

The first test is fairly straightforward:

  object treeify extends Poly1 {
    implicit def caseInt = at[Int](x =>Leaf(x))
    implicit def caseString = at[String](x =>Leaf(x))
  }

  def treeifyHlist[L <: HList](l: L)(implicit m: ops.hlist.Mapper[treeify.type, L]): m.Out = l map treeify

But when doing the nested case, I bump into problems:

implicit def caseNestedHlist[L <: HList](implicit mapper: ops.hlist.Mapper[treeify.type, L]) = at[L] {x =>
  val subtree= (x map treeify)
  subtree.toList
}

gives me the compile error:

could not find implicit value for parameter toTraversableAux: shapeless.ops.hlist.ToTraversable.Aux[mapper.Out,List,Lub]

Trying to add that as another implicit parameter:

implicit def caseNestedHlist[L <: HList](implicit mapper: ops.hlist.Mapper[treeify.type, L], toTraversableAux: ToTraversable.Aux[mapper.Out,List,Lub]) = ...

gives the error: illegal dependent method type: parameter appears in the type of another parameter in the same section or an earlier one

I am pretty much stumped here. Is there a way of fixing this? Is what I am trying to do possible?

by triggerNZ at July 02, 2015 04:12 AM

Planet Clojure

Mobile Transit FTW

Transit was released by Cognitect last summer. At the time, my impression was that there is actually “meat” behind this library in that it

  1. addresses some inherent problems related to the anemic type system of JSON, and
  2. is fast, using native JSON parsers

Apart from that, I had no real need to look into it, until now. Here's my success story; I hope it helps.




Recently I've been working on Replete, a bootstrapped ClojureScript REPL iOS app, and one problem was that launch could be slow (up to 40 seconds or so) on older iOS devices, owing primarily to the length of time spent reading edn files containing cached compiler metadata.

David Nolen suggested evaluating Transit as a faster alternative. Before doing that, I did the easy test of simply inlining the compiler metadata directly into the Replete source. (An aside: This was trivial, given the data is readable Clojure value literals. Copy-n-paste, baby!)

The inlined variant improved things on an iPad 3, reducing the 30 seconds spent setting up the compiler metadata to only 4 seconds. While this felt like a hack, it could effectively solve the launch performance problem.

Then I updated Replete to use Transit instead. First, I checked that JavaScriptCore does indeed have the needed JSON.parse functionality. (This is part of what makes the Transit approach fast—it is a bit of built-in native functionality.) Then I updated Replete's build script to convert the edn files to Transit files, and I replaced Replete's launch logic that reads edn with the analog for loading Transit. (The Transit APIs are wonderfully simple—I invested no time in “learning” them, and just went with a few lines from READMEs.)

In the end, for this use case, Transit ended up being read in just as fast as the approach using inlined ClojureScript data literals, thus being a perfectly suitable solution to the problem. On top of that, Replete no longer needs to bundle the original edn, and instead can bundle Transit, which is about 14% smaller for my use case.

Cool stuff. I'm now convinced of Transit, both in its speed and ease of use!

by Mike Fikes at July 02, 2015 04:00 AM

QuantOverflow

Why is the VIX futures market usually in a state of contango?

I'm a VIX newbie and I'm trying to understand why the VIX futures market is usually in a state of contango.

All I can figure is that the sellers of VIX futures contracts demand high "prices" (because the seller is the holder of the short position and makes money when the price falls), and since there are willing buyers, namely ETNs and ETFs that are trying to track the S&P 500 VIX SHORT-TERM FUTURES INDEX (SPVIXSTR) through the purchase and sale of VIX futures, the contracts get sold.

Also, the farther out the futures contract expires, the less certain the seller is about what the value of the VIX and the SPVIXSTR will be at the future's expiration date. The greater uncertainty over a longer term results in the seller demanding higher premiums over a longer term than he would demand over a shorter term.

It seems like a prudent buyer of VIX futures wouldn't buy contracts that are priced so high.

It seems like the VIX futures market should be in a state of contango about as frequently as it is in a state of backwardation.

What causes contango in the VIX futures market, and why is it the usual state of the market?

by David at July 02, 2015 03:32 AM

StackOverflow

Append a column to Data Frame in Apache Spark 1.3

Is it possible, and what would be the most efficient, neat method, to add a column to a Data Frame?

More specifically, the column may serve as Row IDs for the existing Data Frame.

In a simplified case, reading from a file and not tokenizing it, I can think of something like the below (in Scala), but it fails with errors (at line 3), and anyway doesn't look like the best route possible:

var dataDF = sc.textFile("path/file").toDF() 
val rowDF = sc.parallelize(1 to dataDF.count().toInt).toDF("ID") 
dataDF = dataDF.withColumn("ID", rowDF("ID")) 
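One hedged sketch of a common workaround (assuming Spark 1.3+, that zipWithIndex ordering is acceptable as a row ID, and with the column name "ID" purely illustrative) is to extend the schema and rebuild the DataFrame:

import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{LongType, StructField, StructType}

val dataDF = sc.textFile("path/file").toDF()

// Pair every row with its index without collecting anything to the driver.
val withIdRdd = dataDF.rdd.zipWithIndex.map { case (row, id) => Row.fromSeq(row.toSeq :+ id) }

// Rebuild a DataFrame whose schema is the old one plus a Long "ID" column.
val withIdSchema = StructType(dataDF.schema.fields :+ StructField("ID", LongType, nullable = false))
val dataWithId = sqlContext.createDataFrame(withIdRdd, withIdSchema)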

by Oleg Shirokikh at July 02, 2015 03:16 AM

QuantOverflow

Conversion of SPY prices to ES prices

I have a system that I use intraday that works great on SPY. Due to the extra leverage available plus other benefits I am thinking about trading the system using ES.

Is there a conversion factor that can be used to convert SPY prices to ES prices? What other factors would I have to take into account before using a system that works well on SPY on the ES?

by James Swinburn at July 02, 2015 02:34 AM

StackOverflow

Getting F-bounded polymorphism to work on a base trait with type parameters?

trait A[T, This[_] <: A[T, This]]
case class B[T]() extends A[T, B]

<console>:8: error: type arguments [T,B] do not conform to trait A's type parameter bounds [T,This[_] <: A[T,This]]
       case class B[T]() extends A[T, B]

This seems odd to me, because it seems like it should work? Guidance welcome...
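For what it's worth, a small variant that does seem to compile (just a sketch: the bound ranges over the constructor's own parameter instead of being pinned to the outer T) is:

trait A[T, This[U] <: A[U, This]]
case class B[T]() extends A[T, B]

With This[U] <: A[U, This], the conformance check for B only needs B[U] <: A[U, B] for every U, which B's declaration satisfies, whereas the original bound fixed the first type argument to the enclosing T.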

Thank you

by A Question Asker at July 02, 2015 01:37 AM

arXiv Discrete Mathematics

Counting the Number of Langford Skolem Pairings. (arXiv:1507.00315v1 [cs.DM])

We compute the number of solutions to the Langford pairings problem and the Nickerson variant of the problem. These correspond to sequences A014552 and A059106 in Sloane's Online Encyclopedia of Integer Sequences. We find that the number of Langford pairings for n=27 equals 111,683,611,098,764,903,232, and for n=28 equals 1,607,383,260,609,382,393,152. The number of solutions for the Nickerson variant of Langford pairings for n=24 equals 102,388,058,845,620,672, and for n=25 equals 1,317,281,759,888,482,688.

by <a href="http://arxiv.org/find/cs/1/au:+Liu_A/0/1/0/all/0/1">Ali Assarpour Amotz Barnoy Ou Liu</a> at July 02, 2015 01:30 AM

A Hidden Signal in the Ulam sequence. (arXiv:1507.00267v1 [math.CO])

The Ulam sequence is defined as $a_1 =1, a_2 = 2$ and $a_n$ being the smallest integer that can be written as the sum of two distinct earlier elements in a unique way. This gives $$1, 2, 3, 4, 6, 8, 11, 13, 16, 18, 26, 28, 36, 38, 47, \dots$$ Virtually nothing is known about the behavior of the sequence. Consecutive differences do not seem to be periodic and can be large, e.g. $a_{18858} - a_{18857} = 315.$ The purpose of this short note is to report the following empirical discovery: there seems to exist a real $\alpha \sim 2.571447499\dots$ such that $$\left\{\alpha a_n: n\in \mathbb{N}\right\} \qquad \mbox{is not uniformly distributed mod} ~2\pi.$$ The distribution function of $\alpha a_n~\mbox{mod}~2\pi$ seems to be supported on an interval of length $\sim 3$ and has a curious shape. Indeed, for the first $10^7$ elements of Ulam's sequence, we have $$ \cos{\left( 2.5714474995~ a_n\right)} < 0 \qquad \mbox{for all}~a_n \notin \left\{2, 3, 47, 69\right\}.$$ We hope that this very rigid structure will eventually provide a way of understanding properties of the Ulam sequence and believe that the arising phenomenon might be of interest in itself.

by <a href="http://arxiv.org/find/math/1/au:+Steinerberger_S/0/1/0/all/0/1">Stefan Steinerberger</a> at July 02, 2015 01:30 AM

From Causes for Database Queries to Repairs and Model-Based Diagnosis and Back. (arXiv:1507.00257v1 [cs.DB])

In this work we establish and investigate connections between causes for query answers in databases, database repairs wrt. denial constraints, and consistency-based diagnosis. The first two are relatively new research areas in databases, and the third one is an established subject in knowledge representation. We show how to obtain database repairs from causes, and the other way around. Causality problems are formulated as diagnosis problems, and the diagnoses provide causes and their responsibilities. The vast body of research on database repairs can be applied to the newer problems of computing actual causes for query answers and their responsibilities. These connections, which are interesting per se, allow us, after a transition -inspired by consistency-based diagnosis- to computational problems on hitting sets and vertex covers in hypergraphs, to obtain several new algorithmic and complexity results for database causality.

by <a href="http://arxiv.org/find/cs/1/au:+Salimi_B/0/1/0/all/0/1">Babak Salimi</a>, <a href="http://arxiv.org/find/cs/1/au:+Bertossi_L/0/1/0/all/0/1">Leopoldo Bertossi</a> at July 02, 2015 01:30 AM

ReCon: Revealing and Controlling Privacy Leaks in Mobile Network Traffic. (arXiv:1507.00255v1 [cs.CR])

Mobile systems have become increasingly popular for providing ubiquitous Internet access; however, recent studies demonstrate that software running on these systems extensively tracks and leaks users' personally identifiable information (PII). We argue that these privacy leaks persist in large part because mobile users have little visibility into PII leaked through the network traffic generated by their devices, and have poor control over how, when and where that traffic is sent and handled by third parties. In this paper, we describe ReCon, a cross-platform system that reveals PII leaks and gives users control over them without requiring any special privileges or custom OSes. Specifically, our key observation is that PII leaks must occur over the network, so we implement our system in the network using a software middlebox built atop the Meddle platform. While this simplifies access to users' network flows, the key challenges for detecting PII from the network perspective are 1) how to efficiently and accurately detect users' PII without knowing a priori what their PII is and 2) whether to block, obfuscate, or ignore the PII leak. To address this, we use a machine learning approach to detect traffic that contains PII, display these behaviors via a visualization tool and let the user decide how the system should act on transmitted PII. We discuss the design and implementation of the system and evaluate its methodology with measurements from controlled experiments and flows from 16 users (19 devices) as part of an IRB-approved user study.

by <a href="http://arxiv.org/find/cs/1/au:+Ren_J/0/1/0/all/0/1">Jingjing Ren</a>, <a href="http://arxiv.org/find/cs/1/au:+Rao_A/0/1/0/all/0/1">Ashwin Rao</a>, <a href="http://arxiv.org/find/cs/1/au:+Lindorfer_M/0/1/0/all/0/1">Martina Lindorfer</a>, <a href="http://arxiv.org/find/cs/1/au:+Legout_A/0/1/0/all/0/1">Arnaud Legout</a>, <a href="http://arxiv.org/find/cs/1/au:+Choffnes_D/0/1/0/all/0/1">David Choffnes</a> at July 02, 2015 01:30 AM

Performance analysis of a Tor-like onion routing implementation. (arXiv:1507.00245v1 [cs.DC])

The current onion routing implementation of Tribler works as expected but throttles the overall throughput of the Tribler system. This article discusses a measuring procedure to reproducibly profile the tunnel implementation so further optimizations of the tunnel community can be made. Our work has been integrated into the Tribler eco-system.

by <a href="http://arxiv.org/find/cs/1/au:+Stokkink_Q/0/1/0/all/0/1">Quinten Stokkink</a>, <a href="http://arxiv.org/find/cs/1/au:+Treep_H/0/1/0/all/0/1">Harmjan Treep</a>, <a href="http://arxiv.org/find/cs/1/au:+Pouwelse_J/0/1/0/all/0/1">Johan Pouwelse</a> at July 02, 2015 01:30 AM

Arbitrarily long relativistic bit commitment. (arXiv:1507.00239v1 [quant-ph])

We consider the recent relativistic bit commitment protocol introduced by Lunghi et al. [Phys. Rev. Lett. 2015] and present a new security analysis against classical attacks. In particular, while the initial complexity of the protocol scaled double-exponentially with the commitment time, our analysis shows that the correct dependence is only linear. This has dramatic implications in terms of implementation: in particular, the commitment time can easily be made arbitrarily long, by only requiring both parties to communicate classically and perform efficient classical computation.

by <a href="http://arxiv.org/find/quant-ph/1/au:+Chakraborty_K/0/1/0/all/0/1">Kaushik Chakraborty</a>, <a href="http://arxiv.org/find/quant-ph/1/au:+Chailloux_A/0/1/0/all/0/1">Andr&#xe9; Chailloux</a>, <a href="http://arxiv.org/find/quant-ph/1/au:+Leverrier_A/0/1/0/all/0/1">Anthony Leverrier</a> at July 02, 2015 01:30 AM

Implementing generating functions to obtain power indices with coalition configuration. (arXiv:1507.00216v1 [math.OC])

We consider the Banzhaf-Coleman and Owen power indices for weighted majority games modified by a coalition configuration. We present calculation algorithms for them that make use of the method of generating functions. We programmed the procedure in the open-source language R, and it is illustrated by a real-life example taken from the social sciences.

by <a href="http://arxiv.org/find/math/1/au:+Veiga_J/0/1/0/all/0/1">Jorge Rodr&#xed;guez Veiga</a>, <a href="http://arxiv.org/find/math/1/au:+Flores_G/0/1/0/all/0/1">Guido I. Novoa Flores</a>, <a href="http://arxiv.org/find/math/1/au:+Mendez_B/0/1/0/all/0/1">Balbina V. Casas M&#xe9;ndez</a> at July 02, 2015 01:30 AM

Asymptotic properties of free monoid morphisms. (arXiv:1507.00206v1 [math.CO])

Motivated by applications in the theory of numeration systems and recognizable sets of integers, this paper deals with morphic words when erasing morphisms are taken into account. Cobham showed that if an infinite word $w =g(f^\omega(a))$ is the image of a fixed point of a morphism $f$ under another morphism $g$, then there exist a non-erasing morphism $\sigma$ and a coding $\tau$ such that $w =\tau(\sigma^\omega(b))$.

Based on the Perron theorem about asymptotic properties of powers of non-negative matrices, our main contribution is an in-depth study of the growth type of iterated morphisms when one replaces erasing morphisms with non-erasing ones. We also explicitly provide an algorithm computing $\sigma$ and $\tau$ from $f$ and $g$.

by <a href="http://arxiv.org/find/math/1/au:+Charlier_E/0/1/0/all/0/1">Emilie Charlier</a>, <a href="http://arxiv.org/find/math/1/au:+Leroy_J/0/1/0/all/0/1">Julien Leroy</a>, <a href="http://arxiv.org/find/math/1/au:+Rigo_M/0/1/0/all/0/1">Michel Rigo</a> at July 02, 2015 01:30 AM

Modeling and Analysis of Content Caching in Wireless Small Cell Networks. (arXiv:1507.00182v1 [cs.IT])

Network densification with small cell base stations is a promising solution to satisfy future data traffic demands. However, increasing small cell base station density alone does not ensure better user quality-of-experience and incurs high operational expenditures. Therefore, content caching on different network elements has been proposed as a means of offloading the backhaul by caching strategic contents at the network edge, thereby reducing latency. In this paper, we investigate cache-enabled small cells in which we model and characterize the outage probability, defined as the probability of not satisfying users' requests over a given coverage area. We analytically derive a closed-form expression of the outage probability as a function of signal-to-interference ratio, cache size, small cell base station density and threshold distance. By assuming the distribution of base stations as a Poisson point process, we derive the probability of finding a specific content within a threshold distance and the optimal small cell base station density that achieves a given target cache hit probability. Furthermore, simulations are performed to validate the analytical model.

by <a href="http://arxiv.org/find/cs/1/au:+Tamoor_ul_Hassan_S/0/1/0/all/0/1">Syed Tamoor-ul-Hassan</a>, <a href="http://arxiv.org/find/cs/1/au:+Bennis_M/0/1/0/all/0/1">Mehdi Bennis</a>, <a href="http://arxiv.org/find/cs/1/au:+Nardelli_P/0/1/0/all/0/1">Pedro H. J. Nardelli</a>, <a href="http://arxiv.org/find/cs/1/au:+Latva_Aho_M/0/1/0/all/0/1">Matti Latva-Aho</a> at July 02, 2015 01:30 AM

Forward and Backward Bisimulations for Chemical Reaction Networks. (arXiv:1507.00163v1 [cs.LO])

We present two quantitative behavioral equivalences over species of a chemical reaction network (CRN) with semantics based on ordinary differential equations. Forward CRN bisimulation identifies a partition where each equivalence class represents the exact sum of the concentrations of the species belonging to that class. Backward CRN bisimulation relates species that have the identical solutions at all time points when starting from the same initial conditions. Both notions can be checked using only CRN syntactical information, i.e., by inspection of the set of reactions. We provide a unified algorithm that computes the coarsest refinement up to our bisimulations in polynomial time. Further, we give algorithms to compute quotient CRNs induced by a bisimulation. As an application, we find significant reductions in a number of models of biological processes from the literature. In two cases we allow the analysis of benchmark models which would be otherwise intractable due to their memory requirements.

by <a href="http://arxiv.org/find/cs/1/au:+Cardelli_L/0/1/0/all/0/1">Luca Cardelli</a>, <a href="http://arxiv.org/find/cs/1/au:+Tribastone_M/0/1/0/all/0/1">Mirco Tribastone</a>, <a href="http://arxiv.org/find/cs/1/au:+Tschaikowski_M/0/1/0/all/0/1">Max Tschaikowski</a>, <a href="http://arxiv.org/find/cs/1/au:+Vandin_A/0/1/0/all/0/1">Andrea Vandin</a> at July 02, 2015 01:30 AM

Randomized Revenue Monotone Mechanisms for Online Advertising. (arXiv:1507.00130v1 [cs.GT])

Online advertising is the main source of revenue for many Internet firms. A central component of online advertising is the underlying mechanism that selects and prices the winning ads for a given ad slot. In this paper we study designing a mechanism for the Combinatorial Auction with Identical Items (CAII) in which we are interested in selling $k$ identical items to a group of bidders each demanding a certain number of items between $1$ and $k$. CAII generalizes important online advertising scenarios such as image-text and video-pod auctions [GK14]. In image-text auction we want to fill an advertising slot on a publisher's web page with either $k$ text-ads or a single image-ad and in video-pod auction we want to fill an advertising break of $k$ seconds with video-ads of possibly different durations.

Our goal is to design truthful mechanisms that satisfy Revenue Monotonicity (RM). RM is a natural constraint which states that the revenue of a mechanism should not decrease if the number of participants increases or if a participant increases her bid.

[GK14] showed that no deterministic RM mechanism can attain PoRM of less than $\ln(k)$ for CAII, i.e., no deterministic mechanism can attain more than $\frac{1}{\ln(k)}$ fraction of the maximum social welfare. [GK14] also design a mechanism with PoRM of $O(\ln^2(k))$ for CAII.

In this paper, we seek to overcome the impossibility result of [GK14] for deterministic mechanisms by using the power of randomization. We show that by using randomization, one can attain a constant PoRM. In particular, we design a randomized RM mechanism with PoRM of $3$ for CAII.

by <a href="http://arxiv.org/find/cs/1/au:+Goel_G/0/1/0/all/0/1">Gagan Goel</a>, <a href="http://arxiv.org/find/cs/1/au:+Hajiaghayi_M/0/1/0/all/0/1">MohammadTaghi Hajiaghayi</a>, <a href="http://arxiv.org/find/cs/1/au:+Khani_M/0/1/0/all/0/1">Mohammad Reza Khani</a> at July 02, 2015 01:30 AM

Secret Key Agreement with Large Antenna Arrays under the Pilot Contamination Attack. (arXiv:1507.00095v1 [cs.CR])

We present a secret key agreement (SKA) protocol for a multi-user time-division duplex system where a base-station (BS) with a large antenna array (LAA) shares secret keys with users in the presence of non-colluding eavesdroppers. In the system, when the BS transmits random sequences to legitimate users for sharing common randomness, the eavesdroppers can attempt the pilot contamination attack (PCA), in which each of the eavesdroppers transmits its target user's training sequence in hopes of acquiring a possible information leak by having the beam steered towards the eavesdropper. We show that there exists a crucial complementary relation between the received signal strengths at the eavesdropper and its target user. This relation tells us that the eavesdropper inevitably leaves a trace that enables us to devise a way of measuring the amount of information leakage to the eavesdropper even if PCA parameters are unknown. To this end, we derive an estimator for the channel gain from the BS to the eavesdropper and propose a rate-adaptation scheme for adjusting the length of the secret key under the PCA. Extensive analysis and evaluations are carried out under various setups, which show that the proposed scheme adequately takes advantage of the LAA to establish the secret keys under the PCA.

by <a href="http://arxiv.org/find/cs/1/au:+Im_S/0/1/0/all/0/1">Sanghun Im</a>, <a href="http://arxiv.org/find/cs/1/au:+Jeon_H/0/1/0/all/0/1">Hyoungsuk Jeon</a>, <a href="http://arxiv.org/find/cs/1/au:+Choi_J/0/1/0/all/0/1">Jinho Choi</a>, <a href="http://arxiv.org/find/cs/1/au:+Ha_J/0/1/0/all/0/1">Jeongseok Ha</a> at July 02, 2015 01:30 AM

A Study of Gradient Descent Schemes for General-Sum Stochastic Games. (arXiv:1507.00093v1 [cs.LG])

Zero-sum stochastic games are easy to solve as they can be cast as simple Markov decision processes. This is however not the case with general-sum stochastic games. A fairly general optimization problem formulation is available for general-sum stochastic games by Filar and Vrieze [2004]. However, the optimization problem there has a non-linear objective and non-linear constraints with special structure. Since gradients of both the objective as well as constraints of this optimization problem are well defined, gradient based schemes seem to be a natural choice. We discuss a gradient scheme tuned for two-player stochastic games. We show in simulations that this scheme indeed converges to a Nash equilibrium, for a simple terrain exploration problem modelled as a general-sum stochastic game. However, it turns out that only global minima of the optimization problem correspond to Nash equilibria of the underlying general-sum stochastic game, while gradient schemes only guarantee convergence to local minima. We then provide important necessary conditions for gradient schemes to converge to Nash equilibria in general-sum stochastic games.

by <a href="http://arxiv.org/find/cs/1/au:+Prasad_H/0/1/0/all/0/1">H. L. Prasad</a>, <a href="http://arxiv.org/find/cs/1/au:+Bhatnagar_S/0/1/0/all/0/1">Shalabh Bhatnagar</a> at July 02, 2015 01:30 AM

Workload Trace Generation for Dynamic Environments in Cloud Computing. (arXiv:1507.00090v1 [cs.DC])

Cloud computing datacenters provide millions of virtual machines in actual cloud markets. In this context, Virtual Machine Placement (VMP) is one of the most challenging problems in cloud infrastructure management, considering the large number of possible optimization criteria and different formulations that could be studied. Considering the on-demand model of cloud computing, the VMP problem should be optimized dynamically to efficiently serve the typical workloads of modern applications. This work proposes several dynamic environments for solving the VMP from the providers' perspective based on the most relevant dynamic parameters studied so far in the VMP literature. A complete set of environments and workload trace examples is presented in this work.

by <a href="http://arxiv.org/find/cs/1/au:+Ortigoza_J/0/1/0/all/0/1">Jammily Ortigoza</a>, <a href="http://arxiv.org/find/cs/1/au:+Lopez_Pires_F/0/1/0/all/0/1">Fabio L&#xf3;pez-Pires</a>, <a href="http://arxiv.org/find/cs/1/au:+Baran_B/0/1/0/all/0/1">Benjam&#xed;n Bar&#xe1;n</a> at July 02, 2015 01:30 AM

Specifying Concurrent Problems: Beyond Linearizability. (arXiv:1507.00073v1 [cs.DC])

Tasks and objects are two predominant ways of specifying distributed problems. A task is specified by an input/output relation, defining for each set of processes that may run concurrently, and each assignment of inputs to the processes in the set, the valid outputs of the processes. An object is specified by an automaton describing the outputs the object may produce when it is accessed sequentially. Thus, tasks explicitly state what may happen only when sets of processes run concurrently, while objects only specify what happens when processes access the object sequentially. Each one requires its own implementation notion, to tell when an execution satisfies the specification. For objects linearizability is commonly used, a very elegant and useful consistency condition. For tasks implementation notions are less explored.

The paper introduces the notion of an interval-sequential object. The corresponding implementation notion of interval-linearizability generalizes linearizability and allows one to associate states along the interval of execution of an operation. Interval-linearizability allows one to specify any task; however, there are sequential one-shot objects that cannot be expressed as tasks under the simplest interpretation of a task. The paper also shows that a natural extension of the notion of a task is expressive enough to specify any interval-sequential object.

by <a href="http://arxiv.org/find/cs/1/au:+Castaneda_A/0/1/0/all/0/1">Armando Castaneda</a>, <a href="http://arxiv.org/find/cs/1/au:+Raynal_M/0/1/0/all/0/1">Michel Raynal</a>, <a href="http://arxiv.org/find/cs/1/au:+Rajsbaum_S/0/1/0/all/0/1">Sergio Rajsbaum</a> at July 02, 2015 01:30 AM

Weak regularity and finitely forcible graph limits. (arXiv:1507.00067v1 [math.CO])

Graphons are analytic objects representing limits of convergent sequences of graphs. Lov\'asz and Szegedy conjectured that every finitely forcible graphon, i.e. any graphon determined by finitely many subgraph densities, has a simple structure. In particular, one of their conjectures would imply that every finitely forcible graphon has a weak $\varepsilon$-regular partition with the number of parts bounded by a polynomial in $\varepsilon^{-1}$. We construct a finitely forcible graphon $W$ such that the number of parts in any weak $\varepsilon$-regular partition of $W$ is at least exponential in $\varepsilon^{-2}/2^{5\log^*\varepsilon^{-2}}$. This bound almost matches the known upper bound for graphs and, in a certain sense, is the best possible for graphons.

by <a href="http://arxiv.org/find/math/1/au:+Cooper_J/0/1/0/all/0/1">Jacob W. Cooper</a>, <a href="http://arxiv.org/find/math/1/au:+Kaiser_T/0/1/0/all/0/1">Tom&#xe1;&#x161; Kaiser</a>, <a href="http://arxiv.org/find/math/1/au:+Kral_D/0/1/0/all/0/1">Daniel Kr&#xe1;&#x13e;</a>, <a href="http://arxiv.org/find/math/1/au:+Noel_J/0/1/0/all/0/1">Jonathan A. Noel</a> at July 02, 2015 01:30 AM

Towards Interactive Logic Programming. (arXiv:1211.6535v2 [cs.LO] UPDATED)

Linear logic programming uses provability as the basis for computation. In the operational semantics based on provability, executing the additive-conjunctive goal $G_1 \& G_2$ from a program $P$ simply terminates with a success if both $G_1$ and $G_2$ are solvable from $P$. This is an unsatisfactory situation, as a central action of \& -- the action of choosing either $G_1$ or $G_2$ by the user -- is missing in this semantics.

We propose to modify the operational semantics above to allow for more active participation from the user. We illustrate our idea via muProlog, an extension of Prolog with additive goals.

by <a href="http://arxiv.org/find/cs/1/au:+Kwon_K/0/1/0/all/0/1">Keehang Kwon</a>, <a href="http://arxiv.org/find/cs/1/au:+Park_M/0/1/0/all/0/1">Mi-Young Park</a> at July 02, 2015 01:30 AM

CompsciOverflow

DTM that halts on a minimum one input

I am trying to show that this language is Turing-recognizable.

The language A consists of (descriptions of) DTMs T that halt on at least one input.

My justification is that of course it is, because you just need to check whether an accept state is reachable. However, this seems like I am making it trivial.

Is there something larger I am missing?

EDIT: New Idea

Let M be a DTM that does this:

For every string w over the input alphabet, simulate T on w; if it halts, accept.

The only problem I see with this is that if a string causes it to loop forever, then it will never try the others.

by csonq at July 02, 2015 01:29 AM

StackOverflow

Delete directory recursively in Scala

I am writing the following (with Scala 2.10 and Java 6):

import java.io._

def delete(file: File) {
  if (file.isDirectory) 
    Option(file.listFiles).map(_.toList).getOrElse(Nil).foreach(delete(_))
  file.delete
}

How would you improve it? The code seems to work, but it ignores the return value of java.io.File.delete. Can it be done more easily with scala.io instead of java.io?
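One possible variant (just a sketch, still java.io-based) that attempts every child and surfaces the boolean result instead of discarding it:

import java.io.File

def delete(file: File): Boolean = {
  val childrenDeleted =
    if (file.isDirectory)
      Option(file.listFiles).toList.flatten.map(delete).forall(identity)
    else true
  // only report success if all children and the entry itself were deleted
  childrenDeleted && file.delete()
}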

by Michael at July 02, 2015 01:10 AM

Dealing with a Failed `Future`

Given the following two methods:

def f: Future[Int] = Future { 10 }
def g: Future[Int] = Future { 5 }

I'd like to compose them:

scala> import scala.concurrent.Future
import scala.concurrent.Future

scala> import scala.concurrent.Future._
import scala.concurrent.Future._

scala> import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.ExecutionContext.Implicits.global

scala> for { 
     |   a <- f
     |   b <- g
     | } yield (a+b)
res2: scala.concurrent.Future[Int] = scala.concurrent.impl.Promise$DefaultPromise@34f5090e

Now, I'll call Await.result to block until it's finished.

scala> import scala.concurrent.duration._
import scala.concurrent.duration._

As expected, I get 15, since Await.result took a Future[Int] and returned an Int.

scala> Await.result(res2, 5.seconds)
res6: Int = 15

Defining a recoverFn for a failed Future:

scala> val recoverFn: PartialFunction[Throwable, Future[Int]] = 
    { case _ => Future{0} }
recoverFn: PartialFunction[Throwable,scala.concurrent.Future[Int]] = <function1>

I try to define a failedFuture:

scala> def failedFuture: Future[Int] = Future { 666 }.failed.recoverWith{ recoverFn }
<console>:20: error: type mismatch;
 found   : scala.concurrent.Future[Any]
 required: scala.concurrent.Future[Int]
       def failedFuture: Future[Int] = Future { 666 }.failed.recoverWith{ recoverFn }
                                                                        ^

But, I get the above compile-time error.

Specifically, how can I fix this error? Generally, is Future#recoverWith typically how failed Futures are handled?
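For reference, a sketch of how the types line up when recoverWith is applied to the original Future[Int] directly (the .failed projection has type Future[Throwable], which is what widens the combined type to Future[Any] above); the thrown exception is just a placeholder:

// assumes the recoverFn and the implicit ExecutionContext defined above
def failingFuture: Future[Int] =
  Future[Int] { throw new RuntimeException("boom") }.recoverWith { recoverFn }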

by Kevin Meredith at July 02, 2015 01:04 AM

Java 8 Lambda expressions for solving fibonacci (non recursive way)

I am a beginner with the lambda expression feature in Java 8. Lambda expressions are quite useful for solving problems like prime-number checks, factorials, etc.

However, can they be utilized effectively for solving problems like Fibonacci, where the current value depends on the sum of the previous two values? I have solved the prime-number check problem effectively using lambda expressions. The code for that is given below.

boolean checkPrime=n>1 && LongStream.rangeClosed(2, (long) Math.sqrt(n)).parallel().noneMatch(e->(n)%e==0);

In the above code, in the noneMatch method, we are evaluating with the current value (e) in the range. But for the Fibonacci problem, we require the previous two values.

How can we make it happen?

by Kiran Muralee at July 02, 2015 12:49 AM

/r/compilers

BASIC interpreter with Lisp?

I'm trying to find an interpreter for a simple version of BASIC in Lisp (Scheme, Common Lisp, Clojure - or some variant?) because I've implemented a version of Tiny Basic before and I was curious how it looks in Lisp. Also, it would be neat to have a Tiny Basic that could also point to Lisp structures.

submitted by SmokingChrome
[link] [comment]

July 02, 2015 12:46 AM

Planet Theory

Private Approximations of the 2nd-Moment Matrix Using Existing Techniques in Linear Regression

Authors: Or Sheffet
Download: PDF
Abstract: We introduce three differentially-private algorithms that approximate the 2nd-moment matrix of the data. These algorithms, which in contrast to existing algorithms output positive-definite matrices, correspond to existing techniques in the linear regression literature. Specifically, we discuss the following three techniques. (i) For Ridge Regression, we propose setting the regularization coefficient so that by approximating the solution using the Johnson-Lindenstrauss transform we preserve privacy. (ii) We show that adding a small batch of random samples to our data preserves differential privacy. (iii) We show that sampling the 2nd-moment matrix from a Bayesian posterior inverse-Wishart distribution is differentially private provided the prior is set correctly. We also evaluate our techniques experimentally and compare them to the existing "Analyze Gauss" algorithm of Dwork et al.

July 02, 2015 12:41 AM

An exponential lower bound for homogeneous depth-5 circuits over finite fields

Authors: Mrinal Kumar, Ramprasad Saptharishi
Download: PDF
Abstract: In this paper, we show exponential lower bounds for the class of homogeneous depth-$5$ circuits over all small finite fields. More formally, we show that there is an explicit family $\{P_d : d \in \mathbb{N}\}$ of polynomials in $\mathsf{VNP}$, where $P_d$ is of degree $d$ in $n = d^{O(1)}$ variables, such that over all finite fields $\mathbb{F}_q$, any homogeneous depth-$5$ circuit which computes $P_d$ must have size at least $\exp(\Omega_q(\sqrt{d}))$.

To the best of our knowledge, this is the first super-polynomial lower bound for this class for any field $\mathbb{F}_q \neq \mathbb{F}_2$.

Our proof builds up on the ideas developed on the way to proving lower bounds for homogeneous depth-$4$ circuits [GKKS13, FLMS13, KLSS14, KS14] and for non-homogeneous depth-$3$ circuits over finite fields [GK98, GR00]. Our key insight is to look at the space of shifted partial derivatives of a polynomial as a space of functions from $\mathbb{F}_q^n \rightarrow \mathbb{F}_q$ as opposed to looking at them as a space of formal polynomials and builds over a tighter analysis of the lower bound of Kumar and Saraf [KS14].

July 02, 2015 12:41 AM

On the minimum dimension of a Hilbert space needed to generate a quantum correlation

Authors: Jamie Sikora, Antonios Varvitsiotis, Zhaohui Wei
Download: PDF
Abstract: Consider a two-party correlation that can be generated by performing local measurements on a bipartite quantum system. A question of fundamental importance is to understand how many resources, which we quantify by the dimension of the underlying quantum system, are needed to reproduce this correlation. In this paper, we identify an easy-to-compute lower bound on the smallest Hilbert space dimension needed to generate an arbitrary two-party quantum correlation. To derive the lower bound, we combine a new geometric characterization for the set of quantum correlations (arXiv:1506.07297) with techniques that were recently used to lower bound the PSD-rank of a nonnegative matrix, an important notion to mathematical optimization and quantum communication theory (arXiv:1407.4308). We show that our bound is tight on the correlations generated by optimal quantum strategies for the CHSH and the Magic Square Game and also reprove that a family of PR-boxes cannot be realized using quantum strategies.

July 02, 2015 12:41 AM

On the Communication Complexity of Distributed Clustering

Authors: Qin Zhang
Download: PDF
Abstract: In this paper we give a first set of communication lower bounds for distributed clustering problems, in particular, for k-center, k-median and k-means. When the input is distributed across a large number of machines and the number of clusters k is small, our lower bounds match the current best upper bounds up to a logarithmic factor. We have designed a new composition framework in our proofs for multiparty number-in-hand communication complexity which may be of independent interest.

July 02, 2015 12:40 AM

StackOverflow

Processing JSON data in scala template [on hold]

I am new to Play Framework. I am using the Java API for controller actions and Scala for the view part. I want to send data in JSON format from the controller and render it in the view using Scala. I am not proficient in Scala. Can anyone help me with it?

by Surya Chaitanya at July 02, 2015 12:39 AM

Halfbakery

HN Daily

July 01, 2015

StackOverflow

Finding cliques in an undirected graph with computations using Apache-Spark

I know how to find cliques in an undirected graph, and this can be done quite easily using the R programming language.

Since I really want to increase the speed, and wish to do this for a large graph, I want to perform the whole task using Apache Spark. After researching quite a bit, I found out that SparkR was released a few days ago to extend R support to Spark. But since it's fairly new right now, the supported libraries are limited. I tried using the igraph package available in R together with Spark, but couldn't get it to work.

Therefore, I wish to ask those of you who are familiar with Apache Spark: which language (Python/Java/Scala/etc.) should I use for the above task of finding cliques in an undirected graph, so that it can be accomplished easily without much hassle?

Thanks!
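If it helps, Scala is probably the most direct fit, because Spark's graph library (GraphX) is a Scala API. Note that GraphX does not ship a maximal-clique enumerator, so the snippet below is only a sketch of how one might start: it loads an edge list and runs the closest built-in primitive, triangleCount (a triangle is a 3-clique). The file path and app name are placeholders; canonicalOrientation and partitionBy mirror the usual GraphX triangle-count setup.

import org.apache.spark.graphx.{GraphLoader, PartitionStrategy}
import org.apache.spark.{SparkConf, SparkContext}

object CliqueSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("clique-sketch"))
    // Hypothetical input: one "srcId dstId" pair per line.
    val graph = GraphLoader
      .edgeListFile(sc, "hdfs:///tmp/edges.txt", canonicalOrientation = true)
      .partitionBy(PartitionStrategy.RandomVertexCut)
    // Number of triangles (3-cliques) passing through each vertex.
    val triangles = graph.triangleCount().vertices
    triangles.take(10).foreach(println)
    sc.stop()
  }
}

Anything beyond triangles (k-cliques, maximal cliques) would still have to be implemented on top of the RDD/GraphX primitives, whichever language you pick.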

by John Lui at July 01, 2015 11:43 PM

TheoryOverflow

What happens to complexity classes if all $\#P$ problems have polynomial-time algorithms?

As the title says, what happens to other complexity classes if all $\#P$ (Sharp-P) problems have polynomial-time algorithms? What happens to PSPACE?

by mute at July 01, 2015 11:31 PM

CompsciOverflow

How to convert this type of languages to Context Free grammar?

I've already asked a question about solving the context-free grammar for $L = \{a^n b^m c^p \mid n = m + p + 2\}$:

Can this language be defined by a Context Free Grammar?

Now I have just changed the condition to n = m + p - 2 and still can't figure it out. Here is my attempt:

S -> Cc
B -> aBb | ^
C -> aCc | Bb

(With this grammar I can't handle strings like bb, cc, abbc, abcc, etc.)

How does one construct this type of grammar, where n = m + p - x? Can anyone explain how to solve such grammars:

$K = \{a^n b^m c^p \mid n = m + p - 2\}$
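In case it is useful to see the shape of one possible answer, here is a sketch (the nonterminal names M, N, O are mine, and ^ denotes the empty string as in the attempt above). Since n = m + p - 2 means the b's and c's together outnumber the a's by exactly two, split on whether the two surplus letters are cc, bc, or bb, and generate the matched part exactly as for n = m + p:

S  -> M c c | N c | O
M  -> a M c | M'      (M generates a^n b^m c^p with n = m + p)
M' -> a M' b | ^
N  -> a N c | N'      (one surplus b, placed next to the matched b's)
N' -> a N' b | b
O  -> a O c | O'      (two surplus b's)
O' -> a O' b | b b

For the general n = m + p - x, the same idea works with x surplus letters, each of which is either a b placed at the centre or a c appended at the end.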

by Zulqurnain jutt at July 01, 2015 11:15 PM

CompsciOverflow

How much bigger can an LR(1) automaton for a language be than the corresponding LR(0) automaton?

In an LR(0) parser, each state consists of a collection of LR(0) items, which are productions annotated with a position. In an LR(1) parser, each state consists of a collection of LR(1) items, which are productions annotated with a position and a lookahead character.

It's known that given a state in an LR(1) automaton, the configurating set formed by dropping the lookahead tokens from each LR(1) item yields a configurating set corresponding to some state in the LR(0) automaton. In that sense, the main difference between an LR(1) automaton and an LR(0) automaton is that the LR(1) automaton has more copies of the states in the LR(0) automaton, each of which is annotated with lookahead information. For this reason, LR(1) automata for a given CFG are typically larger than the corresponding LR(0) parser for that CFG.

My question is how much larger the LR(1) automaton can be. If there are $n$ distinct terminal symbols in the alphabet of the grammar, then in principle we might need to replicate each state in the LR(0) automaton at least once per subset of those $n$ distinct terminal symbols, potentially leading to an LR(1) automaton that's $2^n$ times larger than the original LR(0) automaton. Given that each individual state in the LR(0) automaton consists of a set of different LR(0) items, we may get an even larger blowup.

That said, I can't seem to find a way to construct a family of grammars for which the LR(1) automaton is significantly larger than the corresponding LR(0) automaton. Everything I've tried has led to a modest increase in size (usually around 2-4x), but I can't seem to find a pattern that leads to a large blowup.

Are there known families of context-free grammars whose LR(1) automata are exponentially larger than the corresponding LR(0) automata? Or is it known that in the worst case, you can't actually get an exponential blowup?

Thanks!

by templatetypedef at July 01, 2015 10:51 PM

StackOverflow

Using void-type for returning this-reference

I'm asking you to consider a Java language idea.

Very often programmers call several void functions, each as a separate statement:

final MyPanel mp = new MyPanel();
mp.setName( "abc" );
mp.setScale( scrollH );
mp.setMinValue( 1 );
mp.setMaxValue( 1000 );
mp.setStep( 10 );
panel.add( mp );

The only way to write the call list on a single line is to use a return this; statement at the end of each function. The idea is to insert a return this; statement at the end of each function and to replace the VOID type with the current CLASS name in the bytecode. Thus, a call to a void function would return a reference to the object on which it was called. Result:

panel.add( new MyClass().setName( "abc" ).setScale( scrollH
).setMinValue( 1 ).setMaxValue( 1000 ).setStep( 10 ) );

The Java language is becoming more functional. This feature could be helpful in that process. Programs that use it would become dramatically shorter, but of course wider.

// OK
obj.static_void_f1().nonstatic_void_f1();
// OK
obj.nonstatic_void_f1().static_void_f1().nonstatic_void_f3();
// ERROR: "non-static method nonstatic_f1() cannot be referenced from a
static context"
MyClass.static_void_f1().nonstatic_void_f1();

Subclasses: if a VOID function is overridden in a subclass, then "return this" will return a SUBCLASS reference.

What do you think?
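For comparison, and only as a sketch of how another JVM language spells the same wish explicitly, Scala lets a setter declare its return type as this.type and end with this. That also covers the subclass point: the chained calls keep the subclass's static type. MyPanel, FancyPanel and their fields below are just stand-ins for the example above.

class MyPanel {
  private var name: String = ""
  private var scale: Int = 0

  // Returning this.type preserves the receiver's (sub)class in a chain.
  def setName(n: String): this.type = { name = n; this }
  def setScale(s: Int): this.type = { scale = s; this }
}

class FancyPanel extends MyPanel {
  private var border: Int = 0
  def setBorder(w: Int): this.type = { border = w; this }
}

// The whole chain still has type FancyPanel.
val p: FancyPanel = new FancyPanel().setName("abc").setScale(10).setBorder(2)

In Java today the usual workaround is the builder pattern, where the setters are declared to return the builder type instead of void.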

by Dmitry at July 01, 2015 10:31 PM

CompsciOverflow

Declarative and Procedural problem solutions [on hold]

Program solutions are structured and designed based on the type of problem at hand. While reading up on when to apply OOD principles to a solution, I came across an article which states...

  • Declarative Plan should guide the design of declarative problems, whereas Procedural Plan should guide the design of procedural problems.
  • Declarative Plan would result in a Hierarchical solution which is Vertical Communication between objects, whereas Procedural Plan would result in a Flat solution which is Horizontal Communication between objects.

I want to know the following:

  1. What does a Declarative Plan mean?
  2. What does a Procedural Plan mean?
  3. What problems are considered as Declarative problems?
  4. What problems are considered as Procedural problems?
  5. What is vertical communication between objects?
  6. What is horizontal communication between objects?

by user793468 at July 01, 2015 10:30 PM

StackOverflow

Using fold on Option without having x => x

Given:

val personsOpt:Option[List[Person]] = ???

I prefer:

persons = personsOpt.fold(List[Person]()){person => person}

To this:

persons = personsOpt.getOrElse(List[Person]())

For type safety reasons. For example this does not compile:

persons = personsOpt.fold(Nil){person => person}

Is there a simple way to get the type safety but not have {person => person}?

EDIT: Two things now concretely understood:

  1. There is nothing un-type-safe about getOrElse. For instance this does not compile: personsOpt.getOrElse("")
  2. Nil is List() and if its type can't be inferred the compiler will ask you to be explicit. So there can be no type issues with using Nil

I couldn't find the link just now, but I did (incorrectly) read that getOrElse was somehow less type safe than using fold with an Option.
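In case a concrete sketch helps (Person is assumed to be defined elsewhere), these are the two usual ways to avoid the identity lambda while keeping the result typed as List[Person]:

// Explicit empty value: Nil is List[Nothing], which widens to List[Person]
// because the expected type is known here.
val persons1: List[Person] = personsOpt.getOrElse(Nil)

// If you prefer fold, use the predefined identity function instead of
// writing {person => person} by hand.
val persons2: List[Person] = personsOpt.fold(List.empty[Person])(identity)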

by Chris Murphy at July 01, 2015 10:28 PM

QuantOverflow

Garch model newbie. help needed with variables

I am very new to using Stata and very new to using GARCH models. I am currently doing my final dissertation for my MSc in Finance, and for my topic I understood that I had to use GARCH to find answers to my questions. So, I am trying to find out whether natural disaster events have any effect on daily data of a composite index (a time series variable). I have my two data sets: one for the composite index, and I have created a few dummy variables, one for the date on which each event took place (within my time range). Moreover, I want to include one control variable that affects the dependent variable (the composite index); I want to include it to remove any noise from my examination.

So, my questions are:

  1. How do I include these variables, i.e. where should I put the control variables and where the independent ones? (I am using the ARCH/GARCH menu; I'm not writing code.)
  2. I would like to examine the persistence of each event (each dummy variable) on my dependent variable, meaning that I want to check whether the shock is still affecting the dependent variable up to 5 days after its occurrence. How can I do that? Should I give my dummy variable the value of 1 not only on the day of the event but on the 5 following days as well?
  3. Finally, do you think it is correct to create one dummy variable for each catastrophic event, or one dummy variable for each type of shock (i.e. 1 for floods, 1 for storms, 1 for earthquakes, etc.)?

Thanks a lot in advance, Evangelos

by evangelos at July 01, 2015 10:17 PM

/r/emacs

StackOverflow

Checked Exception Variance

Java supports Checked Exceptions, but is invariant by default at the declaration site. Scala allows variance annotations using +T and -T, but does not have Checked Exceptions. I am currently designing / implementing a language that is supposed to support both, so I am wondering how variance works for checked exceptions.

Example:

interface Function[-P1, +R, E]
{
    public R apply(P1 par1) throws E
}

What kind of variance annotation should E have, or is it invariant? And further, should I generate an error if it has the opposite variance annotation, similar to how in Scala you get an error if you use a covariant type argument as a function parameter type?

by Clashsoft at July 01, 2015 10:09 PM

CompsciOverflow

Running time analysis of a segment tree

Can someone provide an analysis of the update and query operations of a segment tree?

I thought of an approach which goes like this: at every node, we make at most two recursive calls, on the left and right sub-trees. If we could prove that one of these calls terminates fairly quickly, the time complexity would be logarithmically bounded. But how do we prove this?
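Not a full analysis, but perhaps a useful companion: in the standard iterative (bottom-up) formulation sketched below, both operations walk root-to-leaf paths, and the query loop contributes at most two nodes per level of the tree. That "constantly many nodes per level" invariant is exactly what bounds the top-down recursion as well, giving O(log n) for both update and query. The class and names here are mine, written in Scala for concreteness.

// Array-backed segment tree for range sums; leaves live at indices [n, 2n).
class SegmentTree(values: Array[Long]) {
  private val n = values.length
  private val tree = new Array[Long](2 * n)
  Array.copy(values, 0, tree, n, n)
  for (i <- (n - 1) to 1 by -1) tree(i) = tree(2 * i) + tree(2 * i + 1)

  // Point update: O(log n), a single leaf-to-root path.
  def update(pos: Int, value: Long): Unit = {
    var i = pos + n
    tree(i) = value
    while (i > 1) { i /= 2; tree(i) = tree(2 * i) + tree(2 * i + 1) }
  }

  // Range sum over [from, until): the two cursors climb one level per
  // iteration and add at most two nodes per level, hence O(log n).
  def query(from: Int, until: Int): Long = {
    var lo = from + n
    var hi = until + n
    var acc = 0L
    while (lo < hi) {
      if ((lo & 1) == 1) { acc += tree(lo); lo += 1 }
      if ((hi & 1) == 1) { hi -= 1; acc += tree(hi) }
      lo /= 2
      hi /= 2
    }
    acc
  }
}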

by adijo at July 01, 2015 10:09 PM

StackOverflow

Is there any type system which can assign a type to any halting lambda calculus term?

Some lambda terms, such as the Church numeral 3: (f x -> (f (f (f x)))), are easily typeable in the simply typed lambda calculus. Others, such as pred, (a b c d e f -> (d (g -> (t -> (t (g e)))) (g -> f) (g -> g))), are known to require more complex systems such as System F. Some of them aren't typeable in System F itself, such as this zip I posted in another question. As András Kovács observed on the follow-up, that function can be typed in Agda, although the technique he used for doing so isn't trivial.

I have a lot of similarly tricky terms such as nTuple = (k -> (k (b c d -> (b (e -> (c e d)))) (b -> b) (b -> b))), which receives a Church numeral and returns an N-tuple of that length. I don't know if those terms are typeable. I would like to know:

  1. Some systems seem to ban many "desirable" terms, while others ban fewer of them. What property of a system determines whether it is more accepting or more restrictive?

  2. Is there any type system in which arbitrary lambda terms are typeable?

  3. Is there any general technique/algorithm to find the types of those expressions in such a system?

by Viclib at July 01, 2015 10:05 PM

Planet Emacsen

(or emacs: Power up your locate command

locate

I'm sure many people know that Emacs comes with a locate command. This command, if you're on a Linux system, will find all files on your system that match a particular pattern. The advantage of locate over find when searching the whole system is that it is much faster, since it uses a pre-computed database. This database is updated periodically; you can force an update with:

sudo updatedb

Of course find is faster if you need to search only a specific directory instead of the whole system, but sometimes you just don't know that directory.

counsel-locate

Dynamic

The way locate works is that it asks you for a query, which is glob-based instead of regex-based, and then prints the results to a static buffer.

On the other hand, counsel-locate is dynamic: each time you input a new character, a new locate query is run and the old one is terminated. On my system, it takes around 2 seconds for a query to complete, so it requires a bit of patience.

Regex-based

I like regexes way more than globs for some reason. Here's the command called for the input mp3$:

locate -i --regex mp3$

Of course, the standard ivy-mode method is used to build the regex from a list of space separated words. So the input fleet mp3$ will result in:

locate -i --regex \\(fleet\\).*?\\(mp3$\\)

You could go your own way and update the regex matcher to be ivy--regex-fuzzy, which results in:

locate -i --regex f.*l.*e.*e.*t.* .*m.*p.*3$

But I think fewer matches are usually better than more matches.

Multi-exit

This is just the coolest feature. Basically, for each file you locate, you can easily:

  • Open it in Emacs (default).
  • Open it with xdg-open, so that PDF files are forwarded to evince and MP3 files are forwarded to rhythmbox etc.
  • Open it in dired.

Here's an example of how to do it. First I call counsel-locate, which I like to bind to C-x l. Then I enter emacs pdf$ and wait around 2 seconds for the 248 results to come up. Then I scroll to the 18th result and press C-o to open up the hydra-based option panel:

counsel-locate-1.png

The last column (Action) is newer than the others. As you can see, it has 3 exit points, which I can scroll with w and s. Currently I'm on the default exit point, which would open the file in Emacs.

For the next screenshot:

  • I pressed s twice to change the action to dired.
  • I pressed c to make the current action execute each time a new candidate is selected.

counsel-locate-2.png

So now I could scroll through all the directories on my system that contain PDF files related to Emacs just by holding j.

A similar thing can be done for music tracks:

  • C-x l dire mp3$ to get the list of all Dire Straits tracks on my system.
  • C-o s to switch the action from "open in Emacs" to "xdg-open".
  • From here, I could open one track after another by pressing C-M-n repeatedly. Or I can press c and then j repeatedly.

I've been experimenting with opening EPS and PDF files in quick succession. It's still a work in progress, since I need to use a special wmctrl script to prevent the Emacs window from losing focus each time a new Evince instance is opened.

Outro

You can check out the new feature by installing counsel from MELPA. It will automatically fetch ivy-mode as well. When you enable ivy-mode, besides doing all your completion, it will also remap switch-to-buffer to ivy-switch-buffer. That command also has a multi-exit: pressing C-o sd instead of C-m will kill the selected buffer instead of switching to it. It's a very minor optimization: instead of C-m C-x k you press C-o sd; however, you could e.g. use C-o scjjjj to kill five buffers at once.

While the idea of multi-exits is powerful, it's hard to find places to use it efficiently. I think counsel-locate is a nice place for it, although it could work without it:

  • Find file in Emacs with C-m in the completion interface.
  • Call dired-jump with C-x C-j.
  • Type ! and xdg-open RET.
  • Select and kill the unneeded file with C-x k.

I hope you see now why I prefer C-o sd.

by (or emacs at July 01, 2015 10:00 PM

StackOverflow

Declarative and Procedural problems

Program solutions are structured and designed based on the type of problem at hand. While reading up on when to apply OOD principles to a solution, I came across an article which states...

  • Declarative Plan should guide the design of declarative problems, whereas Procedural Plan should guide the design of procedural problems.
  • Declarative Plan would result in a Hierarchical solution which is Vertical Communication between objects, whereas Procedural Plan would result in a Flat solution which is Horizontal Communication between objects.

I want to know the following:

  1. What does a Declarative Plan mean?
  2. What does a Procedural Plan mean?
  3. What problems are considered as Declarative problems?
  4. What problems are considered as Procedural problems?
  5. What is vertical communication between objects?
  6. What is horizontal communication between objects?

by user793468 at July 01, 2015 09:59 PM

Dave Winer

How to fix my iMac internal fusion drive?

  1. I have a 5K Retina iMac. Its internal drive is an SSD.

  2. It's not working. Let me explain.

  3. I booted off an external drive.

  4. When I look at the drive in the Disk Utility, there's no option to erase it.

  5. When I look at the Partition tab, everything is grayed out.

  6. When I click on Verify Disk, it says everything is good.

  7. When I repair the disk it says it's all good.

  8. The disk does not show up on the desktop.

  9. Software cannot see it.

  10. I want to format the drive. How?

The fix is in!

Update: Problem solved in the Facebook thread.

Bonus: A video of me fixing the problem, in case you have the same thing wrong with your Mac fusion drive.

July 01, 2015 09:40 PM

StackOverflow

splitting contents of a dataframe column using Spark 1.4 for nested json data

I am having issues splitting the contents of a dataframe column using Spark 1.4. The dataframe was created by reading a nested, complex JSON file. I used df.explode but keep getting an error message. The JSON file format is as follows:

[   
    {   
        "neid":{  }, 
        "mi":{   
            "mts":"20100609071500Z", 
            "gp":"900", 
            "tMOID":"Aal2Ap", 
            "mt":[  ], 
            "mv":[   
                {   
                    "moid":"ManagedElement=1,TransportNetwork=1,Aal2Sp=1,Aal2Ap=r1552q", 
                    "r": 
                    [ 
                     1, 
                     2, 
                     5 
                     ] 
                }, 
                { 
                    "moid":"ManagedElement=1,TransportNetwork=1,Aal2Sp=1,Aal2Ap=r1542q", 
                    "r": 
                    [ 
                     1, 
                     2, 
                     5 
                     ] 
 } 
            ] 
        } 
    }, 
    {   
        "neid":{   
            "neun":"RC003", 
            "nedn":"SubNetwork=ONRM_RootMo_R,SubNetwork=RC003,MeContext=RC003", 
            "nesw":"CP90831_R9YC/11" 
        }, 
        "mi":{   
            "mts":"20100609071500Z", 
            "gp":"900", 
            "tMOID":"PlugInUnit", 
            "mt":"pmProcessorLoad", 
            "mv":[   
                {   
                    "moid":"ManagedElement=1,Equipment=1,Subrack=MS,Slot=6,PlugInUnit=1", 
                   "r": 
                     [ 1, 2, 5 
                     ] 
                }, 
                {   
                    "moid":"ManagedElement=1,Equipment=1,Subrack=ES-1,Slot=1,PlugInUnit=1", 
                   "r": 
                  [ 1, 2, 5 
                     ] 
                } 
            ] 
        } 
    } 
]

I used the following code to load it in Spark 1.4:

scala> val df = sqlContext.read.json("/Users/xx/target/statsfile.json") 

scala> df.show() 
+--------------------+--------------------+ 
|                  mi|                neid| 
+--------------------+--------------------+ 
|[900,["pmEs","pmS...|[SubNetwork=ONRM_...| 
|[900,["pmIcmpInEr...|[SubNetwork=ONRM_...| 
|[900,pmUnsuccessf...|[SubNetwork=ONRM_...| 
|[900,["pmBwErrBlo...|[SubNetwork=ONRM_...| 
|[900,["pmSctpStat...|[SubNetwork=ONRM_...| 
|[900,["pmLinkInSe...|[SubNetwork=ONRM_...| 
|[900,["pmGrFc","p...|[SubNetwork=ONRM_...| 
|[900,["pmReceived...|[SubNetwork=ONRM_...| 
|[900,["pmIvIma","...|[SubNetwork=ONRM_...| 
|[900,["pmEs","pmS...|[SubNetwork=ONRM_...| 
|[900,["pmEs","pmS...|[SubNetwork=ONRM_...| 
|[900,["pmExisOrig...|[SubNetwork=ONRM_...| 
|[900,["pmHDelayVa...|[SubNetwork=ONRM_...| 
|[900,["pmReceived...|[SubNetwork=ONRM_...| 
|[900,["pmReceived...|[SubNetwork=ONRM_...| 
|[900,["pmAverageR...|[SubNetwork=ONRM_...| 
|[900,["pmDchFrame...|[SubNetwork=ONRM_...| 
|[900,["pmReceived...|[SubNetwork=ONRM_...| 
|[900,["pmNegative...|[SubNetwork=ONRM_...| 
|[900,["pmUsedTbsQ...|[SubNetwork=ONRM_...| 
+--------------------+--------------------+ 
scala> df.printSchema() 
root 
 |-- mi: struct (nullable = true) 
 |    |-- gp: long (nullable = true) 
 |    |-- mt: string (nullable = true) 
 |    |-- mts: string (nullable = true) 
 |    |-- mv: string (nullable = true) 
 |-- neid: struct (nullable = true) 
 |    |-- nedn: string (nullable = true) 
 |    |-- nesw: string (nullable = true) 
 |    |-- neun: string (nullable = true) 

scala> val df1=df.select("mi.mv").show() 
+--------------------+ 
|                  mv| 
+--------------------+ 
|[{"r":[0,0,0],"mo...| 
|{"r":[0,4,0,4],"m...| 
|{"r":5,"moid":"Ma...| 
|[{"r":[2147483647...| 
|{"r":[225,1112986...| 
|[{"r":[83250,0,0,...| 
|[{"r":[1,2,529982...| 
|[{"r":[26998564,0...| 
|[{"r":[0,0,0,0,0,...| 
|[{"r":[0,0,0],"mo...| 
|[{"r":[0,0,0],"mo...| 
|{"r":[0,0,0,0,0,0...| 
|{"r":[0,0,1],"moi...| 
|{"r":[4587,4587],...| 
|[{"r":[180,180],"...| 
|[{"r":["0,0,0,0,0...| 
|{"r":[0,35101,0,0...| 
|[{"r":["0,0,0,0,0...| 
|[{"r":[0,1558],"m...| 
|[{"r":["7484,4870...| 
+--------------------+ 

scala> df1.explode("mv","mvnew")(mv: String => mv.split(",")) 
<console>:1: error: ')' expected but '(' found. 
       df1.explode("mv","mvnew")(mv: String => mv.split(",")) 
                                                       ^ 
<console>:1: error: ';' expected but ')' found. 
       df1.explode("mv","mvnew")(mv: String => mv.split(",")) 

Am I doing something wrong? I need to extract the data under mi.mv into separate columns so I can apply some transformations.
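Two things stand out, assuming Spark 1.4's DataFrame.explode(inputColumn, outputColumn)(f) overload: df1 is actually of type Unit, because .show() returns Unit rather than a DataFrame, and the lambda's typed parameter has to be wrapped in parentheses, otherwise mv: String => ... is parsed as a type ascription, which is exactly the syntax error reported. A sketch of the corrected shape (the output column name and the split on "," are just illustrative; whether splitting that JSON-ish string on commas is the transformation you ultimately want is a separate question):

scala> val mvDF = df.select("mi.mv")       // keep the DataFrame; don't assign the result of show()
scala> mvDF.show()
scala> val exploded = mvDF.explode("mv", "mvnew") { (mv: String) => mv.split(",").toSeq }
scala> exploded.show()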

by Ak040 at July 01, 2015 09:34 PM

CompsciOverflow

Learning the principles of Petri nets [on hold]

While studying concurrency in one of my programming modules I came across the concept of Petri nets, and I am keen on learning more in depth about the subject. What I am looking for is a serious place to start from, and I came across this website that has some suggestions about literature: http://www.informatik.uni-hamburg.de/TGI/PetriNets/introductions/

Is "Understanding Petri Nets: Modeling Techniques, Analysis Methods, Case Studies" by Reisig a good introduction resource? If not what alternatives are there?

I know this is not exactly a question by the standards of Stack Exchange, but I would be thankful for any suggestions and advice.

by PetarMI at July 01, 2015 09:12 PM

Planet Clojure

[Video 222] Jeanine Adkisson: Design and Prototype a Language In Clojure

How does a programming language work? One way to find out is to implement a new one. In this talk, Jeanine Adkisson describes how programming languages work — and then she shows how some of the basic parts could be implemented in Clojure.

The post [Video 222] Jeanine Adkisson: Design and Prototype a Language In Clojure appeared first on Daily Tech Video.

by reuven at July 01, 2015 09:02 PM

QuantOverflow

Best simplified way to model volatility in returns of an investment in a risky fixed income asset

I am currently working on a project where I have analyzed a certain category of fixed income instruments, and I now have the gross aggregate yield as well as the theoretical gross aggregate default-free yield. Taking the difference of these two yields leaves me with what some people call the "loss rate" of the investment. My question is: if I wanted to model these investments as volatile using that information, what is the best way to do it, and which distribution is best? I realize that the probability of default already factors into the risky yield, but I want something a bit better than that. For example, if I could use the mean yield and the loss rate to construct some sort of simple distribution, I would ideally be able to calculate the variance and therefore be able to give some sort of risk-adjusted performance measure.

It does not have to be fancy at all, and I can assume that all defaults are uncorrelated. What should I use? The exponential distribution? Or a gamma distribution? I think the gamma distribution or beta distribution might be best but I can't quite remember how to translate the mean return and loss rate to the parameters of those distributions. Any help would be greatly appreciated.

In other words, I have modeled the actual yield from my investments as follows:

$$Y = Y_{df}-Y_L$$

where $Y_{df}$ is the default-free yield, and $Y_L$ is the loss in yield due to defaults. So the question really amounts to modeling the distribution of losses, i.e. the distribution of $Y_L$.

EDIT: For example, if I assume that the probability of default and the loss given default were independent, I could write the expected loss in yield as (probability of default)*(loss in yield due to default). Then if I further assume defaults are uncorrelated (an assumption which I hope to remove but will use until I can get the bare-bones model in place), I think I could model the expected losses as some sort of decay process. In this case I think I could simply use an exponential distribution for the number of defaults, which I think I can calibrate if I know the probability of default and the expected loss rate. But I am not sure if this is the best model or which model would allow me to get around those simplifying assumptions, particularly the one where the loss given default is independent of the probability of default.
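One minimal way to make this concrete, under the question's own simplifying assumptions (independent, uncorrelated defaults; only the mean loss rate observed), is to moment-match a one-parameter family such as the exponential distribution, since a single observed moment can only pin down a single parameter:

$$Y_L \sim \operatorname{Exp}(\lambda), \qquad \mathbb{E}[Y_L] = \frac{1}{\lambda} = \bar{\ell} \;\Rightarrow\; \lambda = \frac{1}{\bar{\ell}}, \qquad \operatorname{Var}(Y_L) = \frac{1}{\lambda^2} = \bar{\ell}^{\,2},$$

where $\bar{\ell}$ denotes the observed average loss rate (default-free yield minus realized yield). A two-parameter family such as the gamma distribution, $Y_L \sim \Gamma(k,\theta)$ with $\mathbb{E}[Y_L] = k\theta$ and $\operatorname{Var}(Y_L) = k\theta^2$, needs a second moment, or an assumed shape $k$, before it can be calibrated from the same data. This is only a sketch of the moment-matching arithmetic, not a claim that the exponential is the right family for defaults.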

by Paul at July 01, 2015 09:01 PM

Fefe

Do you know who we haven't heard from yet on the Greece question ...

Do you know who we haven't heard from yet on the Greece question?

NATO!

What, NATO has something to say about the Greece question? But of course!

NATO Secretary General Stoltenberg urges the Greeks not to cut their comparatively high defense spending despite the crisis
Wait, what? Military spending? NATO?! Yes, really! The NATO Secretary General has declared that if Greece now cuts its military spending (above average, measured against GDP) because it is broke, then NATO's Russia campaign might fall through, and that would be just terrible!

Let that sink in for a moment!

Unbelievable, this NATO.

July 01, 2015 09:01 PM

StackOverflow

ReactiveMongo + Play 2: stream a sub document (array)

In the following example, I am using:

  • MongoDB (> 3.0 with WiredTiger engine)
  • Play framework 2.3.8 in Scala
  • The org.reactivemongo:play2-reactivemongo:0.11.0.play23-M3

First of all, suppose we have the following document in a MongoDB collection called for instance "urlColl":

{
  "_id" : ObjectId("5593bebe89645672000deec4"),
  "url" : "urlPath",
  "content" : [
      "a",
      "ab",
      "abc",
      "abcd"
  ]}

In Play, it is possible to stream new documents to the client side as soon as they are inserted into the "urlColl" collection, with the following method:

def feed = Action {
  val cursor = collection
    .find(BSONDocument(), BSONDocument("content" -> 1))
    .options(QueryOpts().tailable.awaitData)
    .cursor[List[Try[BSONValue]]](ReadPreference.nearest)

  val dataProducer = cursor.enumerate(Int.MaxValue).map(_.toString)
  Ok.chunked(dataProducer &> EventSource()).as("text/event-stream")
}

and the implicit reader:

implicit object ContentToList extends BSONDocumentReader[List[Try[BSONValue]]] {
    def read(doc: BSONDocument): List[Try[BSONValue]] = {
      doc.getAs[BSONArray]("content").get.stream.toList
    }
}

Then, each time a new document with a "content" array is inserted, it is automatically sent to the client.

My question is quite simple:

Is it possible to stream to the client the new data injected in the "content" sub array?

Indeed, since there is no new document inserted (a new value is inserted in the array of the existing document), nothing is detected in the cursor and a fortiori nothing is sent to the client.

Thank you in advance!

by user3439701 at July 01, 2015 08:58 PM

Java implementation of bubble sort in functional style

How would you implement the following Bubble Sort algorithm in a functional (Java 8) way?

public static final <T extends Comparable<T>> List<T> imperativeBubbleSort(List<T> list) {
    int len = list == null? 0: list.size();
    for(int j = len-1; j > 0; j--) {
        for(int k = 0; k < j; k++) {
            if(list.get(k+1).compareTo(list.get(k)) < 0) {
                list.add(k, list.remove(k+1));
            }
        }
    }
    return list;
}
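Not Java 8, but for comparison, here is a sketch of one purely functional formulation (written in Scala; the same recursive shape could be ported to Java using an immutable list type): a single pass bubbles adjacent out-of-order pairs, and the sort repeats passes until a pass changes nothing.

// One pass swaps adjacent out-of-order pairs; the sort iterates passes
// until it reaches a fixed point, i.e. the list is sorted.
def bubbleSort[T](xs: List[T])(implicit ord: Ordering[T]): List[T] = {
  def onePass(ys: List[T]): List[T] = ys match {
    case a :: b :: rest if ord.gt(a, b) => b :: onePass(a :: rest)
    case a :: rest                      => a :: onePass(rest)
    case Nil                            => Nil
  }
  val next = onePass(xs)
  if (next == xs) next else bubbleSort(next)
}

// bubbleSort(List(5, 3, 1, 4, 2)) == List(1, 2, 3, 4, 5)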

by romerorsp at July 01, 2015 08:53 PM

Planet Clojure

[GSoC 2015] Skummet becomes faster, gets a twin brother

Google Summer of Code 2015, reporting in. Everything goes well so far, tests are working. But testing takes quite a lot of time, especially if you need to test with different Android versions. Robolectric and Clojure startup times combined take their toll when you have to relaunch them several times. Can we do better? — I asked myself. Perhaps we can.

So I have an idea to adapt Skummet instead of Clojure for running tests. This is theoretically possible, but requires some work, so I decided to devote a few days to hacking Skummet. And to tell the truth I was quite pleased with the results.

Meet Pumpet

Pumpet (Norwegian: pumped) is a swole brother of Skummet. He is like Skummet on steroids. While Skummet hits the treadmill, Pumpet lifts iron in the basement. He is also very athletic and fast, he never misses his legs day and squats like there is no tomorrow, but he can't outrun his bro Skummet (average startup time of hello-world with Pumpet is 350 ms). But boy oh boy can he lift. And by lifting I mean the dynamic compilation.

So the main difference between Skummet and Pumpet is that Pumpet can eval. Pumpet completely (I hope) supports dynamic compilation even though it is itself compiled in the lean mode. To be able to do that Pumpet needs to have some extra body weight, but trust me, it's all muscle.

For example, Skummet doesn't emit macros during compilation because after the compilation is done they are useless (if you don't plan to do eval). Pumpet on the other hand can't skip macros, he must keep all the carbs and protein to be able to lift (eval) whichever the weight (expression) you give him.

For the same reason, you can't use Proguard with Pumpet. Proguard is like fasting — you declare unused stuff as useless and remove it. Pumpet can't fast because otherwise he will lose his gains and won't be able to lift (eval).

For all remaining purposes Skummet and Pumpet are exactly the same. They might probably become a single JAR again when I perform the java.lang.Compiler refactor. Still, the idea is the following: if you don't need eval in your program/application, you want to be able to reduce the binary size and memory usage by removing unused functions, and you want every bit of fast startup — go Skummet. If you need that dynamic compilation — choose Pumpet.

250 ms, Carl!

Skummet on desktop — 250 ms startup time!

That's right. With the Skummet version I just deployed, a hello-world application finishes on average in 300 milliseconds on my half-decayed Thinkpad X220. For reference, a Java hello-world finishes in 80 ms. I think that in absolute numbers this is a huge victory.

Although I pushed the boundaries even further. By replacing the vanilla clojure.lang.Compiler class with the stubbed one in the final compiled package I reduced the startup time to 250 ms. The only problem I have with that is that I don't know how to package two Compiler classes (original and stubbed) into the same jar. To be able to do that sensibly a significant refactoring of Compiler class is required which I don't want to do just yet. But it will be done at some point.

Locating lean vars in runtime

I made it possible for RT.var to find lean Vars at runtime. It uses reflection, but otherwise is pretty fast. This reduces the need to mark Vars used like that as non-lean, and overall removes a lot of headache and potential bugs.

Reworked algorithm for lean-compiling recursive definitions

Once I started singletoning functions (removing references to them from __init classes), I faced the problem of compiling recursive functions. The problem got even worse when the function was multi-arity one, and had some internal subfunctions that called the top-level one (see clojure.core/str). Previously I had to resort to not-singletoning them but keeping their instances in the namespace class. Now the code responsible for that is rewritten, and even more functions are singletoned now (thus can be removed by Proguard if unused).

Delayed core_print.clj initialization

The file core_print.clj contains a lot of calls to defmethod and prefer-method. Running them all takes some time. Especially on Android, where you usually don't start printing Clojure data structures right away, this initialization can be delayed until pr is called for the first time, which also squeezes out some sweet milliseconds from the startup time.

Links

I have pushed the new versions of both compilers.

More information on Skummet and how to use it can be found here.

I haven't tested them on Android just yet but they should be working. Now that clojure 1.7 is out, I want to carefully polish them before releasing 1.7 versions myself. Meanwhile, please try them, report bugs and keep working out!

UPDATE: Just fixed a very nasty reflection bug that short-circuited compilation with lein-skummet. r4's should work now.

by Clojure on Android at July 01, 2015 08:48 PM

Planet Clojure

Clojure from the ground up: debugging

Writing software can be an exercise in frustration. Useless error messages, difficult-to-reproduce bugs, missing stacktrace information, obscure functions without documentation, and unmaintained libraries all stand in our way. As software engineers, our most useful skill isn’t so much knowing how to solve a problem as knowing how to explore a problem that we haven’t seen before. Experience is important, but even experienced engineers face unfamiliar bugs every day. When a problem doesn’t bear a resemblance to anything we’ve seen before, we fall back on general cognitive strategies to explore–and ultimately solve–the problem.

There’s an excellent book by the mathematician George Polya: How to Solve It, which tries to catalogue how successful mathematicians approach unfamiliar problems. When I catch myself banging my head against a problem for more than a few minutes, I try to back up and consider his principles. Sometimes, just taking the time to slow down and reflect can get me out of a rut.

  1. Understand the problem.
  2. Devise a plan.
  3. Carry out the plan.
  4. Look back.

Seems easy enough, right? Let’s go a little deeper.

Understanding the problem

Well obviously there’s a problem, right? The program failed to compile, or a test spat out bizarre numbers, or you hit an unexpected exception. But try to dig a little deeper than that. Just having a careful description of the problem can make the solution obvious.

Our audit program detected that users can double-withdraw cash from their accounts.

What does your program do? Chances are your program is large and complex, so try to isolate the problem as much as possible. Find preconditions where the error holds.

The problem occurs after multiple transfers between accounts.

Identify specific lines of code from the stacktrace that are involved, specific data that’s being passed around. Can you find a particular function that’s misbehaving?

The balance transfer function sometimes doesn’t increase or decrease the account values correctly.

What are that function’s inputs and outputs? Are the inputs what you expected? What did you expect the result to be, given those arguments? It’s not enough to know “it doesn’t work”–you need to know exactly what should have happened. Try to find conditions where the function works correctly, so you can map out the boundaries of the problem.

Trying to transfer $100 from A to B works as expected, as does a transfer of $50 from B to A. Running a million random transfers between accounts, sequentially, results in correct balances. The problem only seems to happen in production.

If your function–or functions it calls–uses mutable state, like an agent, atom, or ref, the value of those references matters too. This is why you should avoid mutable state wherever possible: each mutable variable introduces another dimension of possible behaviors for your program. Print out those values when they’re read, and after they’re written, to get a description of what the function is actually doing. I am a huge believer in sprinkling (prn x) throughout one’s code to print how state evolves when the program runs.

Each balance is stored in a separate atom. When two transfers happen at the same time involving the same accounts, the new value of one or both atoms may not reflect the transfer correctly.

Look for invariants: properties that should always be true of a program. Devise a test to look for where those invariants are broken. Consider each individual step of the program: does it preserve all the invariants you need? If it doesn’t, what ensures those invariants are restored correctly?

The total amount of money in the system should be constant–but sometimes changes!

Draw diagrams, and invent a notation to talk about the problem. If you’re accessing fields in a vector, try drawing the vector as a set of boxes, and drawing the fields it accesses, step by step on paper. If you’re manipulating a tree, draw one! Figure out a way to write down the state of the system: in letters, numbers, arrows, graphs, whatever you can dream up.

Transferring $5 from A to B in transaction 1, and $5 from B to A in transaction 2:

Transaction  | A  | B
-------------+----+-----
txn1 read    | 10 | 10   ; Transaction 1 sees 10, 10
txn1 write A |  5 | 10   ; A and B now out-of-sync
txn2 read    |  5 | 10   ; Transaction 2 sees 5, 10
txn1 write B |  5 | 15   ; Transaction 1 completes
txn2 write A | 10 | 15   ; Transaction 2 writes based on out-of-sync read
txn2 write B |  5 |  5   ; Should have been 10, 10!

This doesn’t solve the problem, but helps us explore the problem in depth. Sometimes this makes the solution obvious–other times, we’re just left with a pile of disjoint facts. Even if things look jumbled-up and confusing, don’t despair! Exploring gives the brain the pieces; it’ll link them together over time.

Armed with a detailed description of the problem, we’re much better equipped to solve it.

Devise a plan

Our brains are excellent pattern-matchers, but not that great at tracking abstract logical operations. Try changing your viewpoint: rotating the problem into a representation that’s a little more tractable for your mind. Is there a similar problem you’ve seen in the past? Is this a well-known problem?

Make sure you know how to check the solution. With the problem isolated to a single function, we can write a test case that verifies the account balances are correct. Then we can experiment freely, and have some confidence that we’ve actually found a solution.

Can you solve a related problem? If only concurrent transfers trigger the problem, could we solve the issue by ensuring transactions never take place concurrently–e.g. by wrapping the operation in a lock? Could we solve it by logging all transactions, and replaying the log? Is there a simpler variant of the problem that might be tractable–maybe one that always overcounts, but never undercounts?

Consider your assumptions. We rely on layers of abstraction in writing software–that changing a variable is atomic, that lexical variables don’t change, that adding 1 and 1 always gives 2. Sometimes, parts of the computer fail to guarantee those abstractions hold. The CPU might–very rarely–fail to divide numbers correctly. A library might, for supposedly valid input, spit out a bad result. A numeric algorithm might fail to converge, and spit out wrong numbers. To avoid questioning everything, start in your own code, and work your way down to the assumptions themselves. See if you can devise tests that check the language or library is behaving as you expect.

Can you avoid solving the problem altogether? Is there a library, database, or language feature that does transaction management for us? Is integrating that library worth the reduced complexity in our application?

We’re not mathematicians; we’re engineers. Part theorist, yes, but also part mechanic. Some problems take a more abstract approach, and others are better approached by tapping it with a wrench and checking the service manual. If other people have solved your problem already, using their solution can be much simpler than devising your own.

Can you think of a way to get more diagnostic information? Perhaps we could log more data from the functions that are misbehaving, or find a way to dump and replay transactions from the live program. Some problems disappear when instrumented; these are the hardest to solve, but also the most rewarding.

Combine key phrases in a Google search: the name of the library you’re using, the type of exception thrown, any error codes or log messages. Often you’ll find a StackOverflow result, a mailing list post, or a Github issue that describes your problem. This works well when you know the technical terms for your problem–in our case, that we’re performing an atomic, transactional transfer between two variables. Sometimes, though, you don’t know the established names for your problem, and have to resort to blind queries like “variables out of sync” or “overwritten data”–which are much more difficult.

When you get stuck exploring on your own, try asking for help. Collect your description of the problem, the steps you took, and what you expected the program to do. Include any stacktraces or error messages, log files, and the smallest section of source code required to reproduce the problem. Also include the versions of software used–in Clojure, typically the JVM version (java -version), Clojure version (project.clj), and any other relevant library versions.

If the project has a Github page or public issue tracker, like Jira, you can try filing an issue there. Here’s a particularly well-written issue filed by a user on one of my projects. Note that this user included installation instructions, the command they ran, and the stacktrace it printed. The more specific a description you provide, the easier it is for someone else to understand your problem and help!

Sometimes you need to talk through a problem interactively. For that, I prefer IRC–many projects have a channel on the Freenode IRC network where you can ask basic questions. Remember to be respectful of the channel’s time; there may be hundreds of users present, and they have to sort through everything you write. Paste your problem description into a pastebin like Gist, then mention the link in IRC with a short–say a few sentences–description of the problem. I try asking in a channel devoted to a specific library or program first, then back off to a more general channel, like #clojure. There’s no need to ask “Can I ask a question” first–just jump in.

Since the transactional problem we’ve been exploring seems like a general issue with atoms, I might ask in #clojure

aphyr > Hi! Does anyone know the right way to change multiple atoms at the same time?
aphyr > This function and test case (http://gist.github.com/...) seems to double- or under-count when invoked concurrently.

Finally, you can join the project’s email list, and ask your question there. Turnaround times are longer, but you’ll often find a more in-depth response to your question via email. This applies especially if you and the maintainer are in different time zones, or if they’re busy with life. You can also ask specific problems on StackOverflow or other message boards; users there can be incredibly helpful.

Remember, other engineers are taking time away from their work, family, friends, and hobbies to help you. It’s always polite to give them time to answer first–they may have other priorities. A sincere thank-you is always appreciated–as is paying it forward by answering other users' questions on the list or channel!

Dealing with abuse

Sadly, some women, LGBT people, and so on experience harassment on IRC or in other discussion circles. They may be asked inappropriate personal questions, insulted, threatened, assumed to be straight, to be a man, and so on. Sometimes other users will attack questioners for inexperience. Exclusion can be overt (“Read the fucking docs, faggot!”) or more subtle (“Hey dudes, what’s up?”). It only takes one hurtful experience like this to sour someone on an entire community.

If this happens to you, place your own well-being first. You are not obligated to fix anyone else’s problems, or to remain in a social context that makes you uncomfortable.

That said, be aware the other people in a channel may not share your culture. English may not be their main language, or they may have said something hurtful without realizing its impact. Explaining how the comment made you feel can jar a well-meaning but unaware person into reconsidering their actions.

Other times, people are just mean–and it only takes one to ruin everybody’s day. When this happens, you can appeal to a moderator. On IRC, moderators are sometimes identified by an @ sign in front of their name; on forums, they may have a special mark on their username or profile. Large projects may have an official policy for reporting abuse on their website or in the channel topic. If there’s no policy, try asking whoever seems in charge for help. Most projects have a primary maintainer or community manager with the power to mute or ban malicious users.

Again, these ways of dealing with abuse are optional. You have no responsibility to provide others with endless patience, and it is not your responsibility to fix a toxic culture. You can always log off and try something else. There are many communities which will welcome and support you–it may just take a few tries to find the right fit.

If you don’t find community, you can build it. Starting your own IRC channel, mailing list, or discussion group with a few friends can be a great way to help each other learn in a supportive environment. And if trolls ever come calling, you’ll be able to ban them personally.

Now, back to problem-solving.

Execute the plan

Sometimes we can make a quick fix in the codebase, test it by hand, and move on. But for more serious problems, we’ll need a more involved process. I always try to get a reproducible test suite–one that runs in a matter of seconds–so that I can continually check my work.

Persist. Many problems require grinding away for some time. Mix blind experimentation with sitting back and planning. Periodically re-evaluate your work–have you made progress? Identified a sub-problem that can be solved independently? Developed a new notation?

If you get stuck, try a new tack. Save your approach as a comment or using git stash, and start fresh. Maybe using a different concurrency primitive is in order, or rephrasing the data structure entirely. Take a reading break and review the documentation for the library you’re trying to use. Read the source code for the functions you’re calling–even if you don’t understand exactly what it does, it might give you clues to how things work under the hood.

Bounce your problem off a friend. Grab a sheet of paper or whiteboard, describe the problem, and work through your thinking with that person. Their understanding of the problem might be totally off-base, but can still give you valuable insight. Maybe they know exactly what the problem is, and can point you to a solution in thirty seconds!

Finally, take a break. Go home. Go for a walk. Lift heavy, run hard, space out, drink with your friends, practice music, read a book. Just before sleep, go over the problem once more in your head; I often wake up with a new algorithm or new questions burning to get out. Your unconscious mind can come up with unexpected insights if given time away from the problem!

Some folks swear by time in the shower, others by hiking, or with pen and paper in a hammock. Find what works for you! The important thing seems to be giving yourself time away from struggling with the problem.

Look back

Chances are you’ll know as soon as your solution works. The program compiles, transactions generate the correct amounts, etc. Now’s an important time to solidify your work.

Bolster your tests. You may have made the problem less likely, but not actually solved it. Try a more aggressive, randomized test; one that runs for longer, that generates a broader class of input. Try it on a copy of the production workload before deploying your change.

Identify why the new system works. Pasting something in from StackOverflow may get you through the day, but won’t help you solve similar problems in the future. Try to really understand why the program went wrong, and how the new pieces work together to prevent the problem. Is there a more general underlying problem? Could you generalize your technique to solve a related problem? If you’ll encounter this type of issue frequently, could you build a function or library to help build other solutions?

Document the solution. Write down your description of the problem, and why your changes fix it, as comments in the source code. Use that same description of the solution in your commit message, or attach it as a comment to the resources you used online, so that other people can come to the same understanding.

Debugging Clojure

With these general strategies in mind, I’d like to talk specifically about debugging Clojure code–especially understanding its stacktraces. Consider this simple program for baking cakes:

(ns scratch.debugging)

(defn bake
  "Bakes a cake for a certain amount of time, returning a cake with a new
  :tastiness level."
  [pie temp time]
  (assoc pie :tastiness
         (condp (* temp time) <
           400 :burned
           350 :perfect
           300 :soggy)))

And in the REPL

user=> (bake {:flavor :blackberry} 375 10.25)
ClassCastException java.lang.Double cannot be cast to clojure.lang.IFn  scratch.debugging/bake (debugging.clj:8)

This is not particularly helpful. Let’s print a full stacktrace using pst:

user=> (pst)
ClassCastException java.lang.Double cannot be cast to clojure.lang.IFn
        scratch.debugging/bake (debugging.clj:8)
        user/eval1223 (form-init4495957503656407289.clj:1)
        clojure.lang.Compiler.eval (Compiler.java:6619)
        clojure.lang.Compiler.eval (Compiler.java:6582)
        clojure.core/eval (core.clj:2852)
        clojure.main/repl/read-eval-print--6588/fn--6591 (main.clj:259)
        clojure.main/repl/read-eval-print--6588 (main.clj:259)
        clojure.main/repl/fn--6597 (main.clj:277)
        clojure.main/repl (main.clj:277)
        clojure.tools.nrepl.middleware.interruptible-eval/evaluate/fn--591 (interruptible_eval.clj:56)
        clojure.core/apply (core.clj:617)
        clojure.core/with-bindings* (core.clj:1788)

The first line tells us the type of the error: a ClassCastException. Then there’s some explanatory text: we can’t cast a java.lang.Double to a clojure.lang.IFn. The indented lines show the functions that led to the error. The first line is the deepest function, where the error actually occurred: the bake function in the scratch.debugging namespace. In parentheses is the file name (debugging.clj) and line number (8) from the code that caused the error. Each following line shows the function that called the previous line. In the REPL, our code is invoked from a special function compiled by the REPL itself–with an automatically generated name like user/eval1223, and that function is invoked by the Clojure compiler, and the REPL tooling. Once we see something like Compiler.eval at the repl, we can generally skip the rest.

As a general rule, we want to look at the deepest (earliest) point in the stacktrace that we wrote. Sometimes an error will arise from deep within a library or Clojure itself–but it was probably invoked by our code somewhere. We’ll skim down the lines until we find our namespace, and start our investigation at that point.

Our case is simple: debugging.clj, on line 8, seems to be the culprit.

(condp (* temp time) <

Now let’s consider the error itself: ClassCastException: java.lang.Double cannot be cast to clojure.lang.IFn. This implies we had a Double and tried to cast it to an IFn–but what does “cast” mean? For that matter, what’s a Double, or an IFn?

A quick google search for java.lang.Double reveals that it’s a class (a Java type) with some basic documentation. “The Double class wraps a value of the primitive type double in an object” is not particularly informative–but the “class hierarchy” at the top of the page shows that a Double is a kind of java.lang.Number. Let’s experiment at the REPL:

user=> (type 4)
java.lang.Long
user=> (type 4.5)
java.lang.Double

Indeed: decimal numbers in Clojure appear to be doubles. One of the expressions in that condp call was probably a decimal. At first we might suspect the literal values 300, 350, or 400–but those are Longs, not Doubles. The only Double we passed in was the time duration 10.25–which appears in condp as (* temp time). That first argument was a Double, but should have been an IFn.

What the heck is an IFn? Its source code has a comment:

IFn provides complete access to invoking any of Clojure’s API’s. You can also access any other library written in Clojure, after adding either its source or compiled form to the classpath.

So IFn has to do with invoking Clojure’s API. Ah–Fn probably stands for function–and this class is chock full of things like invoke(Object arg1, Object arg2). That suggests that IFn is about calling functions. And the I? Google suggests it’s a Java convention for an interface–whatever that is. Remember, we don’t have to understand everything–just enough to get by. There’s plenty to explore later.

Let’s check our hypothesis in the repl:

user=> (instance? clojure.lang.IFn 2.5)
false
user=> (instance? clojure.lang.IFn conj)
true
user=> (instance? clojure.lang.IFn (fn [x] (inc x)))
true

So Doubles aren’t IFns–but Clojure built-in functions, and anonymous functions, both are. Let’s double-check the docs for condp again:

user=> (doc condp)
-------------------------
clojure.core/condp
([pred expr & clauses])
Macro
  Takes a binary predicate, an expression, and a set of clauses.
  Each clause can take the form of either:

  test-expr result-expr

  test-expr :>> result-fn

  Note :>> is an ordinary keyword.

  For each clause, (pred test-expr expr) is evaluated. If it returns
  logical true, the clause is a match. If a binary clause matches, the
  result-expr is returned, if a ternary clause matches, its result-fn,
  which must be a unary function, is called with the result of the
  predicate as its argument, the result of that call being the return
  value of condp. A single default expression can follow the clauses,
  and its value will be returned if no clause matches. If no default
  expression is provided and no clause matches, an
  IllegalArgumentException is thrown.

That’s a lot to take in! No wonder we got it wrong! We’ll take it slow, and look at the arguments.

(condp (* temp time) <

Our pred was (* temp time) (a Double), and our expr was the comparison function <. For each clause, (pred test-expr expr) is evaluated, so that would expand to something like

((* temp time) 400 <)

Which evaluates to something like

(123.45 400 <)

But this isn’t a valid Lisp program! It starts with a number, not a function. We should have written (< 123.45 400). Our arguments are backwards!

(defn bake
  "Bakes a cake for a certain amount of time, returning a cake with a new
  :tastiness level."
  [pie temp time]
  (assoc pie :tastiness
         (condp < (* temp time)
           400 :burned
           350 :perfect
           300 :soggy)))

user=> (use 'scratch.debugging :reload)
nil
user=> (bake {:flavor :chocolate} 375 10.25)
{:tastiness :burned, :flavor :chocolate}
user=> (bake {:flavor :chocolate} 450 0.8)
{:tastiness :perfect, :flavor :chocolate}

Mission accomplished! We read the stacktrace as a path to a part of the program where things went wrong. We identified the deepest part of that path in our code, and looked for a problem there. We discovered that we had reversed the arguments to a function, and after some research and experimentation in the REPL, figured out the right order.

An aside on types: some languages have a stricter type system than Clojure’s, in which the types of variables are explicitly declared in the program’s source code. Those languages can detect type errors–when a variable of one type is used in place of another, incompatible, type–and offer more precise feedback. In Clojure, the compiler does not generally enforce types at compile time, which allows for significant flexibility–but requires more rigorous testing to expose these errors.

Higher order stacktraces

The stacktrace shows us a path through the program, moving downwards through functions. However, that path may not be straightforward. When data is handed off from one part of the program to another, the stacktrace may not show the origin of an error. When functions are handed off from one part of the program to another, the resulting traces can be tricky to interpret indeed.

For instance, say we wanted to make some picture frames out of wood, but didn’t know how much wood to buy. We might sketch out a program like this:

(defn perimeter
  "Given a rectangle, returns a vector of its edge lengths."
  [rect]
  [(:x rect)
   (:y rect)
   (:z rect)
   (:y rect)])

(defn frame
  "Given a mat width, and a photo rectangle, figure out the size of the frame
  required by adding the mat width around all edges of the photo."
  [mat-width rect]
  (let [margin (* 2 rect)]
    {:x (+ margin (:x rect))
     :y (+ margin (:y rect))}))

(def failure-rate
  "Sometimes the wood is knotty or we screw up a cut. We'll assume we need a
  spare segment once every 8."
  1/8)

(defn spares
  "Given a list of segments, figure out roughly how many of each distinct size
  will go bad, and emit a sequence of spare segments, assuming we screw up
  `failure-rate` of them."
  [segments]
  (->> segments
       ; Compute a map of each segment length to the number of
       ; segments we'll need of that size.
       frequencies
       ; Make a list of spares for each segment length,
       ; based on how often we think we'll screw up.
       (mapcat (fn [[segment n]]
                 (repeat (* failure-rate n) segment)))))

(def cut-size
  "How much extra wood do we need for each cut? Let's say a mitred cut for a
  1-inch frame needs a full inch."
  1)

(defn total-wood
  [mat-width photos]
  "Given a mat width and a collection of photos, compute the total linear
  amount of wood we need to buy in order to make frames for each, given a
  2-inch mat."
  (let [segments (->> photos
                      ; Convert photos to frame dimensions
                      (map (partial frame mat-width))
                      ; Convert frames to segments
                      (mapcat perimeter))]

    ; Now, take segments
    (->> segments
         ; Add the spares
         (concat (spares segments))
         ; Include a cut between each segment
         (interpose cut-size)
         ; And sum the whole shebang.
         (reduce +))))

(->> [{:x 8 :y 10}
      {:x 10 :y 8}
      {:x 20 :y 30}]
     (total-wood 2)
     (println "total inches:"))

Running this program yields a curious stacktrace. We’ll print the full trace (not the shortened one that comes with pst) for the last exception *e with the .printStackTrace function.

user=> (.printStackTrace *e)
java.lang.ClassCastException: clojure.lang.PersistentArrayMap cannot be cast to java.lang.Number, compiling:(scratch/debugging.clj:73:23)
    at clojure.lang.Compiler.load(Compiler.java:7142)
    at clojure.lang.RT.loadResourceScript(RT.java:370)
    at clojure.lang.RT.loadResourceScript(RT.java:361)
    at clojure.lang.RT.load(RT.java:440)
    at clojure.lang.RT.load(RT.java:411)
    ...
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassCastException: clojure.lang.PersistentArrayMap cannot be cast to java.lang.Number
    at clojure.lang.Numbers.multiply(Numbers.java:146)
    at clojure.lang.Numbers.multiply(Numbers.java:3659)
    at scratch.debugging$frame.invoke(debugging.clj:26)
    at clojure.lang.AFn.applyToHelper(AFn.java:156)
    at clojure.lang.AFn.applyTo(AFn.java:144)
    at clojure.core$apply.invoke(core.clj:626)
    at clojure.core$partial$fn__4228.doInvoke(core.clj:2468)
    at clojure.lang.RestFn.invoke(RestFn.java:408)
    at clojure.core$map$fn__4245.invoke(core.clj:2557)
    at clojure.lang.LazySeq.sval(LazySeq.java:40)
    at clojure.lang.LazySeq.seq(LazySeq.java:49)
    at clojure.lang.RT.seq(RT.java:484)
    at clojure.core$seq.invoke(core.clj:133)
    at clojure.core$map$fn__4245.invoke(core.clj:2551)
    at clojure.lang.LazySeq.sval(LazySeq.java:40)
    at clojure.lang.LazySeq.seq(LazySeq.java:49)
    at clojure.lang.RT.seq(RT.java:484)
    at clojure.core$seq.invoke(core.clj:133)
    at clojure.core$apply.invoke(core.clj:624)
    at clojure.core$mapcat.doInvoke(core.clj:2586)
    at clojure.lang.RestFn.invoke(RestFn.java:423)
    at scratch.debugging$total_wood.invoke(debugging.clj:62)
    ...

First: this trace has two parts. The top-level error (a CompilerException) appears first, and is followed by the exception that caused the CompilerException: a ClassCastException. This makes the stacktrace read somewhat out of order, since the deepest part of the trace occurs in the first line of the last exception. We read C B A then F E D. This is an old convention in the Java language, and the cause of no end of frustration.

Notice that this representation of the stacktrace is less friendly than (pst). We’re seeing the Java Virtual Machine (JVM)’s internal representation of Clojure functions, which look like clojure.core$partial$fn__4228.doInvoke. This corresponds to the namespace clojure.core, in which there is a function called partial, inside of which is an anonymous function, here named fn__4228. Calling a Clojure function is written, in the JVM, as .invoke or .doInvoke.
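You can see this naming scheme for yourself in the REPL; the numeric suffixes below are only illustrative, since they vary between Clojure versions and sessions:

user=> (class inc)
clojure.core$inc
user=> (class (partial + 1))
clojure.core$partial$fn__4228
user=> (class (fn [x] (inc x)))
user$eval3$fn__4

The last one is an anonymous function compiled inside an eval form in the user namespace, which is exactly the kind of name these stacktrace frames are built from.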

So: the root cause was a ClassCastException, and it tells us that Clojure expected a java.lang.Number, but found a PersistentArrayMap. We might guess that PersistentArrayMap is something to do with the map data structure, which we used in this program:

user=> (type {:x 1})
clojure.lang.PersistentArrayMap

And we’d be right. We can also tell, by reading down the stacktrace looking for our scratch.debugging namespace, where the error took place: scratch.debugging$frame, on line 26.

(let [margin (* 2 rect)]

There’s our multiplication operation *, which we might assume expands to clojure.lang.Numbers.multiply. But the path to the error is odd.

(->> photos
     ; Convert photos to frame dimensions
     (map (partial frame mat-width))

In total-wood, we call (map (partial frame mat-width) photos) right away, so we’d expect the stacktrace to go from total-wood to map to frame. But this is not what happens. Instead, total-wood invokes something called RestFn–a piece of Clojure plumbing–which in turn calls mapcat.

    at clojure.core$mapcat.doInvoke(core.clj:2586)
    at clojure.lang.RestFn.invoke(RestFn.java:423)
    at scratch.debugging$total_wood.invoke(debugging.clj:62)

Why doesn’t total-wood call map first? Well it did–but map doesn’t actually apply its function to anything in the photos vector when invoked. Instead, it returns a lazy sequence–one which applies frame only when elements are asked for.

user=> (type (map inc (range 10)))
clojure.lang.LazySeq

Inside each LazySeq is a box containing a function. When you ask a LazySeq for its first value, it calls that function to return a new sequence–and that’s when frame gets invoked. What we’re seeing in this stacktrace is the LazySeq internal machinery at work–mapcat asks it for a value, and the LazySeq asks map to generate that value.

    at clojure.core$partial$fn__4228.doInvoke(core.clj:2468)
    at clojure.lang.RestFn.invoke(RestFn.java:408)
    at clojure.core$map$fn__4245.invoke(core.clj:2557)
    at clojure.lang.LazySeq.sval(LazySeq.java:40)
    at clojure.lang.LazySeq.seq(LazySeq.java:49)
    at clojure.lang.RT.seq(RT.java:484)
    at clojure.core$seq.invoke(core.clj:133)
    at clojure.core$map$fn__4245.invoke(core.clj:2551)
    at clojure.lang.LazySeq.sval(LazySeq.java:40)
    at clojure.lang.LazySeq.seq(LazySeq.java:49)
    at clojure.lang.RT.seq(RT.java:484)
    at clojure.core$seq.invoke(core.clj:133)
    at clojure.core$apply.invoke(core.clj:624)
    at clojure.core$mapcat.doInvoke(core.clj:2586)
    at clojure.lang.RestFn.invoke(RestFn.java:423)
    at scratch.debugging$total_wood.invoke(debugging.clj:62)
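This is a general property of lazy sequences: the work, and therefore any error that work triggers, happens wherever the sequence is realized, not where it was defined. A tiny standalone sketch (separate from the wood program) makes the effect visible:

(def xs (map #(/ 1 %) [1 2 0]))  ; defining the lazy seq throws nothing
(doall xs)                       ; realizing it throws ArithmeticException: Divide by zero,
                                 ; with a trace full of LazySeq and clojure.core frames
                                 ; rather than the line where xs was defined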

In fact we pass through map’s laziness twice here: a quick peek at (source mapcat) shows that it expands into a map call itself, and then there’s a second map: the one we created in total-wood. Then an odd thing happens–we hit something called clojure.core$partial$fn__4228.

(map (partial frame mat-width) photos)

The frame function takes two arguments: a mat width and a photo. We wanted a function that takes just one argument: a photo. (partial frame mat-width) took mat-width and generated a new function which takes one arg–call it photo–and calls (frame mat-width photo). That automatically generated function, returned by partial, is what map uses to generate new elements of its sequence on demand.

user=> (partial + 1)
#<core$partial$fn__4228 clojure.core$partial$fn__4228@243634f2>
user=> ((partial + 1) 4)
5

That’s why we see control flow through clojure.core$partial$fn__4228 (an anonymous function defined inside clojure.core/partial) on the way to frame.

Caused by: java.lang.ClassCastException: clojure.lang.PersistentArrayMap cannot be cast to java.lang.Number
    at clojure.lang.Numbers.multiply(Numbers.java:146)
    at clojure.lang.Numbers.multiply(Numbers.java:3659)
    at scratch.debugging$frame.invoke(debugging.clj:26)
    at clojure.lang.AFn.applyToHelper(AFn.java:156)
    at clojure.lang.AFn.applyTo(AFn.java:144)
    at clojure.core$apply.invoke(core.clj:626)
    at clojure.core$partial$fn__4228.doInvoke(core.clj:2468)

And there’s our suspect! scratch.debugging/frame, at line 26. To return to that line again:

(let [margin (* 2 rect)]

* is a multiplication, and 2 is obviously a number, but rect is a map here. Aha! We meant to multiply the mat-width by two, not the rectangle.

(defn frame
  "Given a mat width, and a photo rectangle, figure out the size of the frame
  required by adding the mat width around all edges of the photo."
  [mat-width rect]
  (let [margin (* 2 mat-width)]
    {:x (+ margin (:x rect))
     :y (+ margin (:y rect))}))

I believe we’ve fixed the bug, then. Let’s give it a shot!

The unbearable lightness of nil

There’s one more bug lurking in this program. This one’s stacktrace is short.

user=> (use 'scratch.debugging :reload)

CompilerException java.lang.NullPointerException, compiling:(scratch/debugging.clj:73:23)
user=> (pst)
CompilerException java.lang.NullPointerException, compiling:(scratch/debugging.clj:73:23)
    clojure.lang.Compiler.load (Compiler.java:7142)
    clojure.lang.RT.loadResourceScript (RT.java:370)
    clojure.lang.RT.loadResourceScript (RT.java:361)
    clojure.lang.RT.load (RT.java:440)
    clojure.lang.RT.load (RT.java:411)
    clojure.core/load/fn--5066 (core.clj:5641)
    clojure.core/load (core.clj:5640)
    clojure.core/load-one (core.clj:5446)
    clojure.core/load-lib/fn--5015 (core.clj:5486)
    clojure.core/load-lib (core.clj:5485)
    clojure.core/apply (core.clj:626)
    clojure.core/load-libs (core.clj:5524)
Caused by: NullPointerException
    clojure.lang.Numbers.ops (Numbers.java:961)
    clojure.lang.Numbers.add (Numbers.java:126)
    clojure.core/+ (core.clj:951)
    clojure.core.protocols/fn--6086 (protocols.clj:143)
    clojure.core.protocols/fn--6057/G--6052--6066 (protocols.clj:19)
    clojure.core.protocols/seq-reduce (protocols.clj:27)
    clojure.core.protocols/fn--6078 (protocols.clj:53)
    clojure.core.protocols/fn--6031/G--6026--6044 (protocols.clj:13)
    clojure.core/reduce (core.clj:6287)
    scratch.debugging/total-wood (debugging.clj:69)
    scratch.debugging/eval1560 (debugging.clj:81)
    clojure.lang.Compiler.eval (Compiler.java:6703)

On line 69, total-wood calls reduce, which dives through a series of functions from clojure.core.protocols before emerging in +: the function we passed to reduce. Reduce is trying to combine two elements from its collection of wood segments using +, but one of them was nil, and adding nil to a number throws a NullPointerException. In total-wood, we constructed the sequence of segments this way:

(let [segments (->> photos
                    ; Convert photos to frame dimensions
                    (map (partial frame mat-width))
                    ; Convert frames to segments
                    (mapcat perimeter))]

  ; Now, take segments
  (->> segments
       ; Add the spares
       (concat (spares segments))
       ; Include a cut between each segment
       (interpose cut-size)
       ; And sum the whole shebang.
       (reduce +))))

Where did the nil value come from? The stacktrace doesn’t say, because the sequence reduce is traversing didn’t have any problem producing the nil. reduce asked for a value and the sequence happily produced a nil. We only had a problem when it came time to combine the nil with the next value, using +.

A stacktrace like this is something like a murder mystery: we know the program died in the reducer, that it was shot with a +, and the bullet was a nil–but we don’t know where the bullet came from. The trail runs cold. We need more forensic information–more hints about the nil’s origin–to find the culprit.
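Here's a standalone sketch (again, separate from the wood program) of the same pattern: the nil is produced quietly in one place, and only explodes later, in code that never mentions the missing key at all:

(def xs [10 20 (:z {:x 1, :y 2})])  ; (:z ...) quietly returns nil
(reduce + xs)                       ; NullPointerException in clojure.lang.Numbers.ops,
                                    ; far from the lookup that produced the nil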

Again, this is a class of error largely preventable with static type systems. If you have worked with a statically typed language in the past, it may help to think of it this way: because nil can stand in for a value of almost any type, almost every Clojure function effectively takes Option[A] and does something more-or-less sensible with it, returning Option[B]. Whether the error propagates as a nil or as an Option, there can be similar difficulties in localizing the cause of the problem.

Let’s try printing out the state as reduce goes along:

(->> segments
     ; Add the spares
     (concat (spares segments))
     ; Include a cut between each segment
     (interpose cut-size)
     ; And sum the whole shebang.
     (reduce (fn [acc x]
               (prn acc x)
               (+ acc x))))))

user=> (use 'scratch.debugging :reload)
12 1
13 14
27 1
28 nil

CompilerException java.lang.NullPointerException, compiling:(scratch/debugging.clj:73:56)

Not every value is nil! There’s a 14 there which looks like a plausible segment for a frame, and two one-inch buffers from cut-size. We can rule out interpose because it inserts a 1 every time, and that 1 reduces correctly. But where’s that nil coming from? Is it from segments or (spares segments)?

(let [segments (->> photos
                    ; Convert photos to frame dimensions
                    (map (partial frame mat-width))
                    ; Convert frames to segments
                    (mapcat perimeter))]
  (prn :segments segments)

user=> (use 'scratch.debugging :reload)
:segments (12 14 nil 14 14 12 nil 12 24 34 nil 34)

It is present in segments. Let’s trace it backwards through the sequence’s creation. It’d be handy to have a function like prn that returned its input, so we could spy on values as they flowed through the ->> macro.

(defn spy [& args]
  (apply prn args)
  (last args))

(let [segments (->> photos
                    ; Convert photos to frame dimensions
                    (map (partial frame mat-width))
                    (spy :frames)
                    ; Convert frames to segments
                    (mapcat perimeter))]

user=> (use 'scratch.debugging :reload)
:frames ({:x 12, :y 14} {:x 14, :y 12} {:x 24, :y 34})
:segments (12 14 nil 14 14 12 nil 12 24 34 nil 34)

Ah! So the frames are intact, but the perimeters are bad. Let’s check the perimeter function:

(defn perimeter
  "Given a rectangle, returns a vector of its edge lengths."
  [rect]
  [(:x rect)
   (:y rect)
   (:z rect)
   (:y rect)])

Spot the typo? We wrote :z instead of :x. Since the frame didn’t have a :z field, it returned nil! That’s the origin of our NullPointerException.
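The fix is a one-character change; here's the corrected function, with :z swapped for :x:

(defn perimeter
  "Given a rectangle, returns a vector of its edge lengths."
  [rect]
  [(:x rect)
   (:y rect)
   (:x rect)
   (:y rect)])

With the bug fixed, we can re-run and find: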

user=> (use 'scratch.debugging :reload)
total inches: 319

Whallah!

Recap

As we solve more and more problems, we get faster at debugging–at skipping over irrelevant log data, figuring out exactly what input was at fault, knowing what terms to search for, and developing a network of peers and mentors to ask for help. But when we encounter unexpected bugs, it can help to fall back on a family of problem-solving tactics.

We explore the problem thoroughly, localizing it to a particular function, variable, or set of inputs. We identify the boundaries of the problem, carving away parts of the system that work as expected. We develop new notation, maps, and diagrams of the problem space, precisely characterizing it in a variety of modes.

With the problem identified, we search for extant solutions–or related problems others have solved in the past. We trawl through issue trackers, mailing list posts, blogs, and forums like Stack Overflow, or, for more theoretical problems, academic papers, MathWorld, and Wikipedia. If searching reveals nothing, we try rephrasing the problem, relaxing the constraints, adding debugging statements, and solving smaller subproblems. When all else fails, we ask for help from our peers, or from the community in IRC, mailing lists, and so on, or just take a break.

We learned to explore Clojure stacktraces as a trail into our programs, leading to the place where an error occurred. But not all paths are linear, and we saw how lazy operations and higher-order functions create inversions and intermediate layers in the stacktrace. Then we learned how to debug values that were distant from the trace, by adding logging statements and working our way closer to the origin.

Programming languages and us, their users, are engaged in a continual dialogue. We may speak more formally, verbosely, with many types and defensive assertions–or we may speak quickly, generally, in fuzzy terms. The more precise we are with the specifications of our program’s types, the more the program can assist us when things go wrong. Conversely, those specifications harden our programs into strong but rigid forms, and rigid structures are harder to bend into new shapes.

In Clojure we strike a more dynamic balance: we speak in generalities, but we pay for that flexibility. Our errors are harder to trace to their origins. While the Clojure compiler can warn us of some errors, like mis-spelled variable names, it cannot (without a library like core.typed) tell us when we have incorrectly assumed an object will be of a certain type. Even very rigid languages, like Haskell, cannot identify some errors, like reversing the arguments to a subtraction function. Some tests are always necessary, though types are a huge boon.

No matter what language we write in, we use a balance of types and tests to validate our assumptions, both when the program is compiled and when it is run.

The errors that arise in compilation or runtime aren’t rebukes so much as hints. Don’t despair! They point the way towards understanding one’s program in more detail–though the errors may be cryptic. Over time we get better at reading our language’s errors and making our programs more robust.

by Aphyr at July 01, 2015 08:44 PM

StackOverflow

Run Spark job on Playframework + Spark Master/Worker in one Mac

I am trying to run a Spark job on Play Framework plus a Spark master/worker, all on one Mac.

When the job ran, I encountered a java.lang.ClassNotFoundException.

Could you tell me how to solve it?

Here is the code on GitHub.

Environment:

Mac 10.9.5
Java 1.7.0_71
Play 2.2.3
Spark 1.1.1

Setup history:

> cd ~
> git clone git@github.com:apache/spark.git
> cd spark
> git checkout -b v1.1.1 v1.1.1
> sbt/sbt assembly
> vi ~/.bashrc
export SPARK_HOME=/Users/tomoya/spark
> . ~/.bashrc
> hostname
Tomoya-Igarashis-MacBook-Air.local
> vi $SPARK_HOME/conf/slaves
Tomoya-Igarashis-MacBook-Air.local
> play new spark_cluster_sample
default name
type -> scala

Run history:

> $SPARK_HOME/sbin/start-all.sh
> jps
> which play
/Users/tomoya/play/play
> git clone https://github.com/TomoyaIgarashi/spark_cluster_sample
> cd spark_cluster_sample
> play run

Error trace:

play.api.Application$$anon$1: Execution exception[[SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 6, 192.168.1.29):
    java.lang.ClassNotFoundException: controllers.Application$$anonfun$index$1$$anonfun$3
    java.net.URLClassLoader$1.run(URLClassLoader.java:372)
    java.net.URLClassLoader$1.run(URLClassLoader.java:361)
    java.security.AccessController.doPrivileged(Native Method)
    java.net.URLClassLoader.findClass(URLClassLoader.java:360)
    java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    java.lang.Class.forName0(Native Method)
    java.lang.Class.forName(Class.java:340)
    org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:59)
    java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1613)
    java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
    java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
    java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
    java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
    java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
    java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
    java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
    java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
    java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
    org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
    org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
    org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57)
    org.apache.spark.scheduler.Task.run(Task.scala:54)
    org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)
Driver stacktrace:]]
    at play.api.Application$class.handleError(Application.scala:293) ~[play_2.10.jar:2.2.3]
    at play.api.DefaultApplication.handleError(Application.scala:399) [play_2.10.jar:2.2.3]
    at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$13$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:166) [play_2.10.jar:2.2.3]
    at play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$13$$anonfun$apply$1.applyOrElse(PlayDefaultUpstreamHandler.scala:163) [play_2.10.jar:2.2.3]
    at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33) [scala-library-2.10.4.jar:na]
    at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185) [scala-library-2.10.4.jar:na]
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 0.0 failed 4 times, most recent failure: Lost task 1.3 in stage 0.0 (TID 6, 192.168.1.29): java.lang.ClassNotFoundException: controllers.Application$$anonfun$index$1$$anonfun$3
    java.net.URLClassLoader$1.run(URLClassLoader.java:372)
    java.net.URLClassLoader$1.run(URLClassLoader.java:361)
    java.security.AccessController.doPrivileged(Native Method)
    java.net.URLClassLoader.findClass(URLClassLoader.java:360)
    java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    java.lang.Class.forName0(Native Method)
    java.lang.Class.forName(Class.java:340)
    org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:59)
    java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1613)
    java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
    java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
    java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
    java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
    java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
    java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
    java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
    java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
    java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
    java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
    java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
    org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
    org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
    org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57)
    org.apache.spark.scheduler.Task.run(Task.scala:54)
    org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185) ~[spark-core_2.10-1.1.1.jar:1.1.1]
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174) ~[spark-core_2.10-1.1.1.jar:1.1.1]
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173) ~[spark-core_2.10-1.1.1.jar:1.1.1]
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) ~[scala-library-2.10.4.jar:na]
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) ~[scala-library-2.10.4.jar:na]
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173) ~[spark-core_2.10-1.1.1.jar:1.1.1]

Regards

by TomoyaIgarashi at July 01, 2015 08:37 PM

How would I use slick 3.0 to return one row at a time?

How would I build a Scala query to return one row of my table at a time?

My tables are in the following location if they help in answering this question: Slick 3.0 (scala) queries don't return data till they are run multiple times (I think)

 val q5 = for {
  c <- dat.patientsss 
} yield (c.PID, c.Gender, c.Age, c.Ethnicity)

Await.result((db.stream(q5.result).foreach(println)),Duration.Inf)

but instead of printing, I need to return each row.

by S2C at July 01, 2015 08:33 PM

Python List of tuples in Scala

I am using Jython to execute the Python code part (a Python module with utility functions from an existing codebase) that returns a list of tuples, but what I get in Scala is a simple flattened list. Any suggestions on the cause would help. Since I am a beginner with Scala and Jython, this probably isn't the best approach to solve the problem. I call the Python function as shown below:

val viaJython = true
val interp = new PythonInterpreter()
val pyCode =
  interp.compile(
    """import myModule as df
       | aList = df.find_date(foundVal)"""
  )

interp.set("foundVal", foundVal)
interp.exec(pyCode)
println(interp.get("aList"))

by Ark at July 01, 2015 08:32 PM

QuantOverflow

how to create folder and csv files for each row in each sheet (VBA) [on hold]

I have an Excel workbook. In this workbook, I have 3 worksheets, named "First", "Second" and "Third". Each sheet has 4 columns: Date, X1, X2, OneLetter. I'm trying to write a VBA program, but I have no clue. Here is what I want to do:

First, I want to create 3 folders, named after the sheet names.

Second, I want to create some CSV files in each folder. There is one row and one column in each CSV, and the content of each CSV file is the content of one row of the worksheet, such as "01012000,1,2,A".

At last, I want to save each file under a name that combines the date and a letter, such as "A_07012001.csv".

If anyone can help me a little, that would be great.

Thanks in advance


by Kroll DU at July 01, 2015 08:30 PM

StackOverflow

How to add WebJars to my Play app?

In order to use WebJars in my Play app I've added the following route

GET     /webjars/*file              controllers.WebJarAssets.at(file)

In my scala template I've added the following lines:

<link rel='stylesheet' href='@routes.WebJarAssets.at(WebJarAssets.locate("css/bootstrap.min.css"))'>
<link rel='stylesheet' href='@routes.WebJarAssets.at(WebJarAssets.locate("css/bootstrap-theme.min.css"))'>

When I open the app, the css files are perfectly loaded. But as soon as I run activator eclipse, I get the following compilation error:

[info] Compiling 4 Scala sources and 2 Java sources to /Users/d135-1r43/play/blog/target/scala-2.11/classes...
[error] /Users/d135-1r43/play/blog/conf/routes:10: object WebJarAssets is not a member of package controllers
[error] GET     /webjars/*file              controllers.WebJarAssets.at(file)
[error] /Users/d135-1r43/play/blog/conf/routes:10: object WebJarAssets is not a member of package controllers
[error] GET     /webjars/*file              controllers.WebJarAssets.at(file)
[error] /Users/d135-1r43/play/blog/app/views/main.scala.html:8: not found: value WebJarAssets
[error]         <link rel='stylesheet' href='@routes.WebJarAssets.at(WebJarAssets.locate("css/bootstrap.min.css"))'>
[error]                                                              ^
[error] /Users/d135-1r43/play/blog/app/views/main.scala.html:9: not found: value WebJarAssets
[error]         <link rel='stylesheet' href='@routes.WebJarAssets.at(WebJarAssets.locate("css/bootstrap-theme.min.css"))'>
[error]                                                              ^
[error] four errors found
[error] (compile:compile) Compilation failed
[error] Could not create Eclipse project files:
[error] Error evaluating task 'dependencyClasspath': error

Any idea what happened here? Do I need to add something to my classpath?

by d135-1r43 at July 01, 2015 08:14 PM

TheoryOverflow

What is the application of combinatorial game theory

I find Combinatorial Game Theory very interesting as my primary interest is mathematics. My question is why do Computer Scientists (who tend to have a more practical approach) study it as well? Are there some applications of it? This question is similar to this one but I am asking about combinatorial games such as nim. Could someone provide me with a reference?

by Halbort at July 01, 2015 08:06 PM

StackOverflow

How can I deconstruct a Spray API HTTPResponse?

I'm using Spray API (spray-client) to hit an internal Solr URL, I want to be able to parse the response into a Scala case class.

If I just expect an HttpResponse, I'm getting a value back, but when I try to marshal it into my case class, it fails (I can't produce a message other than null(), because I'm using matching and obviously not hitting the right case).

I think some of my problem is that it's returning the data in the form of text/plain instead of application/json. When I expect HttpResponse instead of my case class,

val f: Future[HttpResponse] =
    (IO(Http) ? Get("http://1.2.3.4:8983/solr/collection1/select?q=*%3A*&wt=json")).mapTo[HttpResponse]

I get:

HttpResponse(200 OK,HttpEntity(text/plain; charset=UTF-8,
{
  "responseHeader":{"status":0,"QTime":65,"params":{"q":"*:*","wt":"json"}},
  "response":{"numFound":147437873,"start":0,"maxScore":1.0,"docs":
    [
      {"guid":"TLQ0jVlMYCXQrYkBIZHNXfMmifw+3","alias":["greg"],"_version_":1440942010264453120},
      {"guid":"TQsDY1ZG7q+Ne5e6F7qAUhFyomSH9","_version_":1440942010296958976},
      {"guid":"TzWB5grOBAJJZcAQDo2k9xBUVGPFr","alias":["spark"],"_version_":1440942010298007552},
      {"guid":"T0judCG4UI9RYqDDQVcn+gyZEU7Bb","alias":["zombie"],...),List(Connection: close, Content-Type: text/plain; charset=UTF-8),HTTP/1.1)

But when I change that to expect my case class, I can't match. So, how can I marshal the data it returns into a Scala case class? Here's what I have tried:

case class SolrParams(q: String, wt: String)
case class SolrResponseHeader(status: String, qtime: String, params: SolrParams)
case class SolrDoc(guid: String, alias: List[String], version: String)
case class SolrResponse(numFound: Long, start: Long, maxScore: String, docs: List[SolrDoc])

case class SolrApResult(responseHeader: SolrResponseHeader, response: SolrResponse)

object SolrJsonProtocol extends DefaultJsonProtocol {
  implicit val paramsFormat = jsonFormat2(SolrParams)
  implicit val responseHeaderFormat = jsonFormat2(SolrResponseHeader)
  implicit val docFormat = jsonFormat3(SolrDoc)
  implicit val responseFormat = jsonFormat4(SolrResponse)
  implicit def solrApiResultFormat = jsonFormat2(SolrApiFullResult)
}

...

val f: Future[SolrApiResult] =
    (IO(Http) ? Get("http://1.2.3.4:8983/solr/collection1/select?q=*%3A*&wt=json")).mapTo[SolrApiResult]

Which gives me no match in an f onComplete ... structure. Could the issue be that my case classes aren't matching what's being returned, and if so, what suggestions do you have to troubleshoot it better?

I've been all over the docs and they're either incomplete or a bit dated, plus I'm new at this game so that's not helping either.

by jbnunn at July 01, 2015 08:02 PM

Planet Theory

Polytopix news app (for iOS)

I am very excited to announce that our Polytopix news app (for iOS) is now available on the app store. Check it out.




by kintali at July 01, 2015 08:02 PM

QuantOverflow

asian option – exotic option – real data, authentic examples?

I would be pleased if any of you could give me a real example of an Asian option (or other exotic option) that is being traded or that is offered by some institution.

I have been searching the whole internet, but all I can find is how to price Asian options, and I want to compare those methods with a real situation.

Best regards,

Pablo

by pablo at July 01, 2015 08:01 PM

StackOverflow

What goes into making a Proxy Server?

I'm pretty new to development and I'm going to be building a proxy server for work. I'm not really sure what goes into building a proxy server, and everything I can find just tells me to install something and set one up; but I want to be able to build my own. I'm going to be working in Scala, so what exactly goes into making one, and what does it do?

by Ryan Wilson at July 01, 2015 08:01 PM

Fefe

In Belgium, a 24-year-old woman has now been approved for the ...

In Belgium, a 24-year-old woman who suffers from depression has now been approved for the euthanasia programme. The wording in the law is, roughly, that one must be suffering from untreatable, unbearable pain.

Now, depression does not involve physical pain, but because it counts as an incurable illness, they apparently made an exception. The woman already has several suicide attempts behind her.

Update:

You write "but because it counts as an incurable illness, they apparently made an exception." But in fact the text of the law, under §2 2), reads:

[The physician must in each case]
"be certain of the patient’s constant physical
or mental suffering and of the durable
nature of his/her request"

So the law is very much intended to cover mental problems as well.

July 01, 2015 08:01 PM

StackOverflow

how to serialize case classes with traits with jsonspray

I understand that if I have:

case class Person(name: String)

I can use

object PersonJsonImplicits extends DefaultJsonProtocol {
  implicit val impPerson = jsonFormat1(Person)
}

and thus serialize it with:

import com.example.PersonJsonImplicits._
import spray.json._
new Person("somename").toJson

however what If i have

trait Animal
case class Person(name: String) extends Animal

and I have somewhere in my code

val animal = ???

and I need to serialize it and I want to use json spray

which serializer should I add? I was hoping to have something like:

object AnimalJsonImplicits extends DefaultJsonProtocol {
  implicit val impAnimal = jsonFormat???(Animal)
}

where maybe I need to add some matcher in order to check what type the Animal is, so that if it's a Person I would direct it to the Person format, but I found nothing... I was reading https://github.com/spray/spray-json and don't understand how to do that.

so how can I serialize the set of

trait Animal
case class Person(name: String) extends Animal

with json spray?

by Jas at July 01, 2015 07:52 PM

How to make use of lambda.r type checking?

I must be using this wrong because, as you can see below, lambda.r's type checking doesn't seem to provide much safety:

library(lambda.r)
x <- Integer(5)
> x
[1] 5
attr(,"class")
[1] "Integer" "numeric"
> x %isa% Integer
[1] TRUE
> 6 %isa% Integer
[1] FALSE
> (x + 1) %isa% Integer
[1] TRUE
> (x + .5) %isa% Integer
[1] TRUE
> (x + .5)
[1] 5.5
attr(,"class")
[1] "Integer" "numeric"
> 

by daj at July 01, 2015 07:51 PM

TheoryOverflow

Smallest possible universal combinator

I am looking for the smallest possible universal combinator, measured by the number of abstractions and applications required to specify such a combinator in the lambda calculus. Examples of universal combinators include:

  • size 23: λf.f(fS(KKKI))K
  • size 18: λf.f(fS(KK))K
  • size 14: λf.fKSK
  • size 12: λf.fS(λxyz.x)
  • size 11: λx.xSK

where S = λxyz.xz(yz) of size 6 and K = λxy.x of size 2 are the combinators of the SK combinator calculus. The first 4 examples are described in this paper.

My questions are:

  • Are there any universal combinators that are smaller in size?
  • What is the smallest possible universal combinator?

by user1667423 at July 01, 2015 07:23 PM

StackOverflow

ScalikeJdbc Multiple Insert

How do we perform multiple inserts in the same transaction?

  def insertData(dataList: List[Data])(implicit session: DBSession = autoSession) = {

    // todo: this is probably opening and closing a connection every time?
    dataList.foreach(data => insertData(data))
  }

  def insertData(data: Data) = withSQL {
    val d = DataTable.column
    insert.into(DataTable).namedValues(
      d.name -> data.name,
      d.title -> data.title
    )
  }.update().apply()

It would not be efficient to have a different transaction for every insert if these number in the thousands and up.

http://scalikejdbc.org/documentation/operations.html

by BAR at July 01, 2015 07:17 PM

/r/emacs

Why is Emacs unknown for many users, and how could we "market" Emacs?

I'm wondering why no one wants an editor where they could bend everything to their own preferences.

For example, an editor like Vim is somewhat obscure (same with Emacs). And Vim still manages to be more popular, despite its poor customizability. Look at the Vim subreddit as an example: growing fast, it already has 2.5x more users than the Emacs subreddit. As an ex-Vim user, I know Vim is horrendous for customization/extensibility. So much respect for the plugin developers who still manage to tinker something together in Vim. But I still don't understand why someone would keep using Vim for years when they could give Emacs a try for a while, and yet they still don't want to try it.

Another example: in editors like Eclipse and IntelliJ it's far more difficult to extend things and tinker. But I see that many of the programmers are happy enough with them.

When I did some introspection, I guessed it's because of the Lisp-oriented nature of Emacs. When I gave Emacs a try, I ended up switching back to Vim, because I didn't understand Lisp and didn't want to fiddle around with the weird configuration. So probably Elisp didn't make Emacs popular in the field of programming, where C-oriented syntax dominates (C/C++, Java, C#, PHP, JavaScript, Ruby, Python and so on).

Editors like IntelliJ and Eclipse have very intelligent autocomplete, debugging tools, etc., but that's more because of the work of many users and company resources. If Emacs were really popular, with hundreds of thousands of users, it would have really intelligent autocompletion too, thanks to the many contributions of its users, I guess.

But then I realize that the popular editors, such as Eclipse, Visual Studio, IntelliJ, RubyMine, etc., are fairly young and haven't existed for 40 years like Emacs has. Despite their young age, such editors are already popular and have many users. And Emacs? Still unknown to many users.

Now I'm wondering whether, in all these years that Emacs has existed, something went wrong that made Emacs unpopular with the larger audience. And what could we do to change that?

And sorry for my poor English by the way.

submitted by ReneFroger
[link] [87 comments]

July 01, 2015 07:09 PM

QuantOverflow

Interpretation of Correlation

I have two geometric Brownian motions (GBMs) driven by the same underlying Brownin motion, namely \begin{align*} S_t^1 = S_0^1\exp\left(\left(\mu_1 - \frac{\sigma_1^2}{2}\right)t + \sigma_1 W_t\right), \\ S_t^2 = S_0^2\exp\left(\left(\mu_2 - \frac{\sigma_2^2}{2}\right)t + \sigma_2 W_t\right). \end{align*}

The theoretical correlation between these two processes at time $t$ is $$ Corr(S_t^1, S_t^2) = \frac{\exp(\sigma_1 \sigma_2 t) - 1}{\sqrt{(\exp(\sigma_1^2t) - 1)(\exp(\sigma_2^2t) - 1)}}. $$
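(For reference, this expression follows from the standard lognormal moments: using $E[e^{aW_t}] = e^{a^2 t/2}$, $$ E[S_t^1 S_t^2] = S_0^1 S_0^2 e^{(\mu_1 + \mu_2)t} e^{\sigma_1 \sigma_2 t}, \qquad Var(S_t^i) = (S_0^i)^2 e^{2\mu_i t}\left(e^{\sigma_i^2 t} - 1\right), $$ so $Cov(S_t^1, S_t^2) = S_0^1 S_0^2 e^{(\mu_1 + \mu_2)t}\left(e^{\sigma_1 \sigma_2 t} - 1\right)$, and dividing by the square root of the product of the variances gives the expression above.)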

For example, letting $\sigma_1 = 0.15$ and $\sigma_2 = 0.1$, a plot of $Corr(S_t^1, S_t^2)$ for $0 < t \leq 10$ looks like this: [GBM correlation plot]

A simulation of the processes $S_t^1$ and $S_t^2$ over $0 \leq t \leq 10$ using the same $\sigma_1$ and $\sigma_2$ and letting $\mu_1 = 0.02$, $\mu_2 = 0.1$, $S_0^1 = 30$ and $S_0^2 = 40$ looks like this: [GBM simulation plot]

However, when I use the MATLAB function corr(S_1, S_2) I get that the correlation from this particular time series is corr(S_1, S_2) = 0.6428.

So there are these interpretations of correlation: the correlation of two random variables at a given time, given by $Corr(S_t^1, S_t^2)$, and the correlation of two time series, computed by corr(S_1, S_2). I'm trying to reconcile the difference between the two, and I'd appreciate a solid explanation!

by bcf at July 01, 2015 07:01 PM

StackOverflow

what is the difference between HashSet and Set and when should each one

What is the difference between HashSet and Set and when should each one be used? Here's Map vs HashMap:

val hashSet = HashSet("Tomatoes", "Chilies")
val set = Set("Tomatoes", "Chilies")
set == hashSet // res: Boolean = true

by igx at July 01, 2015 06:47 PM

CompsciOverflow

Maximize distance between k nodes in a graph

I have an undirected unweighted graph $G$ and I want to select $k$ nodes from $G$ such that they are pairwise as far as possible from each other, in terms of geodesic distance. In other words they have to be spread around the graph as possible.

Let $d(u,v)$ be the length of a shortest path between $u$ and $v$ in $G$. Now, for a set of vertices $X \subseteq V(G)$, define $$d(X) = \sum_{\{u,v\} \subseteq X}d(u,v).$$

Let the problem SCATTERED SET be the problem which on input $G,k$ asks to find a set of $k$ vertices $X$ maximizing $d(X)$.

Is there an efficient algorithm solving SCATTERED SET?

by jbx at July 01, 2015 06:38 PM

QuantOverflow

Calculating rate of renewal for Certificate of Deposit

I am trying to calculate the rate of renewal for a large stock of Certificates of Deposit. These contracts are issued for a fixed amount of time, and some of them get renewed every time they reach maturity.

I thought one way I could model the situation was to take the percentage of renewals every day and then fit a regression to project that number, but I don't think it's robust enough.

by user3629666 at July 01, 2015 06:34 PM

StackOverflow

OCaml function syntax error

The following code gives an error:

let alpha = Hashtbl.create 26 in
let print_and_add a =
    print_char a;
    Hashtbl.add alpha a true;;
let str = read_line () in
String.iter (fun x -> if Hashtbl.mem alpha x=false then print_and_add x) str

What it's supposed to do: each time the function is called (with a char argument), it should print the char and add it to the hash table (alpha). I tried using the other syntax for functions:

let alpha = Hashtbl.create 26 in
let print_and_add = (fun a ->
    print_char a;
    Hashtbl.add alpha a true) in
let str = read_line () in
String.iter (fun x -> if Hashtbl.mem alpha x=false then print_and_add x) str

But I still want to know why the first code fails. Thanks for any help.

by Redouane Red at July 01, 2015 06:23 PM

QuantOverflow

Calculate the return of an equally weighted portfolio? [on hold]

This is my initial position. I have 40 stocks and their corresponding arithmetic returns for the following month. Now I have to build an equally weighted portfolio of these stocks and calculate its return. How do I do that? Just add up all 40 returns and divide by 40 afterwards? What happens when I have the logarithmic returns of those 40 stocks? Can I add those up over the portfolio too? Any help is very much appreciated.

by vomicha at July 01, 2015 05:56 PM

StackOverflow

Set changing scala project dependency in Intellij IDEA 14 (for code completion and inspection)

I'm currently developing a multi-project Scala application in IntelliJ IDEA 14 with SBT. Projects have dependencies (dependency projects) between them and I need IntelliJ IDEA to inspect code from all projects, at least, from the projects I use from the current project (working project). I need IntelliJ IDEA to update this inspection (e.g. syntax highlighting) automatically after a change in a dependency project.

This is what I've tried:

  • I've tried to add a project as a library but the problem is that IntelliJ IDEA seems not to recognize the .scala files and packages.

  • I've also tried to set a compiled .jar of the working project as a library and set sbt in the dependency project to package the project into a .jar file every time I change a file: running ~package in the sbt console of the dependency project. But it seems that this daemon prevents IntelliJ IDEA from reading the newly created .jar file in the working project. If I package the project manually after making a change, it works properly (I get code inspection and classes are loaded properly).

  • I've tried SBT Multi-Project too but while this settings work for compiling the working project and the dependecy project, I don't get the IntelliJ IDEA code inspection that I need, everything that is not in the working project is highlighted in red. Or maybe I'm doing it wrong, here is the code (that's all I have done):

    lazy val wcommon = RootProject(file("../WCommon"))
    val wrepository = Project(id = "application", base = file(".")).dependsOn(wcommon)

Any suggestion?

by vicaba at July 01, 2015 05:55 PM

Unable to collect heap dump from openjdk

I am using OpenJDK and wanted to collect heap dump data to analyse some performance issues, but when I try to generate the heap dump, I get an error. What could be the issue here? Any pointers would be appreciated.

 [root@localhost ~]# jmap -dump:file=heap.bin -F 1460
  Attaching to process ID 1460, please wait...
  Debugger attached successfully.
  Server compiler detected.
  JVM version is 24.79-b02
  Dumping heap to heap.bin ...


  Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sun.tools.jmap.JMap.runTool(JMap.java:197)
at sun.tools.jmap.JMap.main(JMap.java:128)
 Caused by: sun.jvm.hotspot.utilities.AssertionFailure: Expecting GenCollectedHeap, G1CollectedHeap, or ParallelScavengeHeap, but got sun.jvm.hotspot.gc_interface.CollectedHeap
at sun.jvm.hotspot.utilities.Assert.that(Assert.java:32)
at sun.jvm.hotspot.oops.ObjectHeap.collectLiveRegions(ObjectHeap.java:604)
at sun.jvm.hotspot.oops.ObjectHeap.iterate(ObjectHeap.java:244)
at sun.jvm.hotspot.utilities.AbstractHeapGraphWriter.write(AbstractHeapGraphWriter.java:51)
at sun.jvm.hotspot.utilities.HeapHprofBinWriter.write(HeapHprofBinWriter.java:416)
at sun.jvm.hotspot.tools.HeapDumper.run(HeapDumper.java:56)
at sun.jvm.hotspot.tools.Tool.start(Tool.java:221)
at sun.jvm.hotspot.tools.HeapDumper.main(HeapDumper.java:77)

by Srikanth Hugar at July 01, 2015 05:54 PM

/r/emacs

C-r (and perhaps other things) not working in Proof General + Evil

I need to use Proof General for my work, and I would like to use Vim style text editing. Evil works well for ordinary emacs stuff, but it doesn't seem to play well with Proof General. For example, if I press "u C-r" (assuming I've done something to undo), an error turns up at the bottom: "Symbol's function definition is void: redo"

I've just started using emacs and I don't really know how to customize very well. How do I diagnose and fix this? Is anything else in Evil broken in Proof General?

submitted by yagsuomynona
[link] [2 comments]

July 01, 2015 05:46 PM

StackOverflow

Best Scala imitation of Groovy's safe-dereference operator (?.)?

I would like to know what the best Scala imitation of Groovy's safe-dereference operator (?.) would be, or at least what some close alternatives are.

I've discussed it briefly on Daniel Spiewak's blog, but would like to open it up to StackOverflow...

For the sake of everyone's time, here is Daniel's initial response, my counter, and his 2nd response:

@Antony

Actually, I looked at doing that one first. Or rather, I was trying to replicate Ragenwald’s andand “operator” from Ruby land. The problem is, this is a bit difficult to do without proxies. Consider the following expression (using Ruby’s andand, but it’s the same with Groovy’s operator):

test.andand().doSomething()

I could create an implicit conversion from Any => some type implementing the andand() method, but that’s where the magic stops. Regardless of whether the value is null or not, the doSomething() method will still execute. Since it has to execute on some target in a type-safe manner, that would require the implementation of a bytecode proxy, which would be flaky and weird (problems with annotations, final methods, constructors, etc).

A better alternative is to go back to the source of inspiration for both andand as well as Groovy’s safe dereference operator: the monadic map operation. The following is some Scala syntax which uses Option to implement the pattern:

val something: Option[String] = … // presumably could be either Some(…) or None

val length = something.map(_.length)

After this, length will either be Some(str.length) (where str is the String object contained within the Option), or None. This is exactly how the safe-dereferencing operator works, except it uses null rather than a type-safe monad.

As pointed out above, we could define an implicit conversion from some type T => Option[T] and then map in that fashion, but some types already have map defined, so it wouldn’t be very useful. Alternatively, I could implement something similar to map but with a separate name, but any way it is implemented, it will rely upon a higher-order function rather than a simple chained call. It seems to be just the nature of statically typed languages (if anyone has a way around this, feel free to correct me).

Daniel Spiewak Monday, July 7, 2008 at 1:42 pm

My 2nd question:

Thanks for the response Daniel regarding ?. I think I missed it! I think I understand what you’re proposing, but what about something like this, assuming you don’t have control over the sources:

company?.getContactPerson?.getContactDetails?.getAddress?.getCity

Say it’s a java bean and you can’t go in and change the return values to Something[T] - what can we do there?

Antony Stubbs Tuesday, July 21, 2009 at 8:07 pm oh gosh - ok on re-read that’s where you’re proposing the implicit conversion from T to Option[T] right? But would you still be able to chain it together like that? You’d still need the map right? hmm….

var city = company.map(_.getContactPerson.map(_.getContactDetails.map(_.getAddress.map(_.getCity))))

?

Antony Stubbs Tuesday, July 21, 2009 at 8:10 pm

His 2nd response:

@Antony

We can’t really do much of anything in the case of company?.getContactPerson, etc… Even assuming this were valid Scala syntax, we

by Antony Stubbs at July 01, 2015 05:33 PM

Planet Theory

Popularizing TOC

It is hard to overestimate the impact of Popular Science books such as “A Brief History of Time” and “Chaos: Making a New Science” on Scientific Research. The indirect impact of popularizing Science and Scientific Education often surpass the direct contribution that most scientists can hope to achieve in their life time. For this reason, many of the greatest scientists (including in our field) choose to invest considerable time in this blessed endeavor. I personally believe that the Theory of Computing deserves more popularization than it gets (and I hope to someday contribute my share). Nevertheless, this post is meant as a tribute to our colleagues who already made wonderful such contributions. I will continuously edit this post with TOC popular books and educational resources (based on my own knowledge and suggestions in the comments).

Popular TOC books:

Scott Aaronson, Quantum Computing since Democritus

Martin Davis, Engines of Logic: Mathematicians and the Origin of the Computer

David Harel, Computers Ltd.: What They Really Can’t Do

David Harel with Yishai Feldman, Algorithmics: The Spirit of Computing

Douglas Hofstadter: Gödel, Escher, Bach: An Eternal Golden Braid

Lance Fortnow, The Golden Ticket: P, NP, and the Search for the Impossible

Dennis Shasha and Cathy Lazere, Out of their Minds: The Lives and Discoveries of 15 Great Computer Scientists

Less Valiant, Probably Approximately Correct: Nature’s Algorithms for Learning and Prospering in a Complex World

Leslie Valiant, Circuits of the Mind

Noson S. Yanofsky, The Outer Limits of Reason: What Science, Mathematics, and Logic Cannot Tell Us

Hector Zenil, Randomness Through Computation: Some Answers, More Questions

Other Resources:

CS Unplugged (including a book)


by Omer Reingold at July 01, 2015 05:28 PM

StackOverflow

Configuration depending on launch mode

Play can be launched in dev mode (via run), in production mode (via start) or in test mode. Is there a way to provide a different config file (conf/application.conf) depending on which mode it is launched in?

by Reactormonk at July 01, 2015 05:27 PM

Insert if not exists in Slick 3.0.0

I'm trying to insert if not exists, I found this post for 1.0.1, 2.0.

I found snippet using transactionally in the docs of 3.0.0

val a = (for {
  ns <- coffees.filter(_.name.startsWith("ESPRESSO")).map(_.name).result
  _ <- DBIO.seq(ns.map(n => coffees.filter(_.name === n).delete): _*)
} yield ()).transactionally

val f: Future[Unit] = db.run(a)

I'm struggling to write the logic from insert if not exists with this structure. I'm new to Slick and have little experience with Scala. This is my attempt to do insert if not exists outside the transaction...

val result: Future[Boolean] = db.run(products.filter(_.name==="foo").exists.result)
result.map { exists =>  
  if (!exists) {
    products += Product(
      None,
      productName,
      productPrice
    ) 
  }  
}

But how do I put this in the transactionally block? This is the furthest I can go:

val a = (for {
  exists <- products.filter(_.name==="foo").exists.result
  //???  
//    _ <- DBIO.seq(ns.map(n => coffees.filter(_.name === n).delete): _*)
} yield ()).transactionally

Thanks in advance

by Ixx at July 01, 2015 05:22 PM

CompsciOverflow

Push down automata what to do when there is no suitable transition

This is a question that has emerged from a recent quiz I have taken. In short

Consider the following transitions on a push down automaton. Assume the starting state is q. Which one of the following inputs will lead the automaton to state p with the stack containing the string XXZ?

I am gonna write only the transitions from state q regardless of the input symbol:

 (q, 0, Z) -> (q, XZ)
 (q, 0, X) -> (q, XX)
 (q, 1, X) -> (q, X)
 (q, ε, Χ) -> (p, ε)

Now the thing we note here is that there is no transition from state q when there is an empty stack regardless of the input symbol. How does one answer that question assuming we start from an empty stack? I have omitted the inputs cause I don't really care about the answer to the quiz, all I want to know is how would someone start working on that?

by Theocharis K. at July 01, 2015 05:09 PM

StackOverflow

ALerting in Riemann?

I am using ELK (logstash, ES, Kibana) stack for log analysis and Riemann for alerting. I have logs in which users is one of the fields parsed by logstash and I send the events to riemann from riemann output plugin.

Logstash parses logs and user is one of the field. Eg: logs parsed

Timestamp              user     command-name
 2014-06-07...         root      sh ./scripts/abc.sh
 2014-06-08...         sid       sh ./scripts/xyz.sh
 2014-06-08...         abc       sh ./scripts/xyz.sh
 2014-06-09...         root      sh ./scripts/xyz.sh

Logstash:

riemann {
    riemann_event => {
        "service"     => "logins"
        "unique_user" => "%{user}"
    }
}

So users values will be like: root, sid, abc, root, sid, def, etc....

So I split the stream by user, i.e. one stream for each unique user. Now, I want to alert when the number of unique users goes above 3. I wrote the following but it's not achieving my purpose.

Riemann:

(streams

 (where (service "logins")
  (by :unique_user
    (moving-time-window 3600 
     (smap (fn [events]
      (let
        [users (count events)]
         (if (> users 3)
          (email "abc@gmail.com")       
     ))))))))

I am new to Riemann and clojure. Any help is appreciated.

by Siddharth Trikha at July 01, 2015 04:55 PM

PowerMock not able to resolve ambiguous reference

I am trying to test a simple application in Scala , and test it with PowerMock.

Below is my code

Service.scala

trait Service {
    def getName(): String
    def start(): Int
}

ServiceListener.scala

trait ServiceListener {
  def onSuccess(service: Service): Unit
  def onFailure(service: Service): Unit
}

SomeSystem.scala

import java.util
import java.util.List
import SomeSystem._

import scala.collection.JavaConversions._

object SomeSystem {

  def notifyServiceListener(serviceListener: ServiceListener, service: Service, success: Boolean) {
    if (serviceListener != null) {
      if (success) {
        serviceListener.onSuccess(service)
      } else {
        serviceListener.onFailure(service)
      }
    }
  }

  def startServiceStaticWay(service: Service): Int = {
    val returnCode = service.start()
    returnCode
  }
}

class SomeSystem {

  private val services: List[Service] = new util.ArrayList[Service]()
  private var serviceListener: ServiceListener = _
  private val events: List[String] = new util.ArrayList[String]()

  def start() {
    for (service <- services) {
      val something = startServiceStaticWay(service)
      val success = something > 0
      notifyServiceListener(serviceListener, service, success)
      addEvent(service, success)
    }
  }

  private def addEvent(service: Service, success: Boolean) {
    events.add(getEvent(service.getName, success))
  }

  private def getEvent(serviceName: String, success: Boolean): String = {
    serviceName + (if (success) "started" else "failed")
  }

  def add(someService: Service) {
    services.add(someService)
  }

  def setServiceListener(serviceListener: ServiceListener) {
    this.serviceListener = serviceListener
  }
}

I am trying to unit test SomeSystem.scala as below

import {ServiceListener, SomeSystem, Service}
import org.junit.Before
import org.junit.Test
import org.junit.runner.RunWith
import org.mockito.Mockito
import org.powermock.api.mockito.PowerMockito
import org.powermock.modules.junit4.PowerMockRunner
//remove if not needed
import scala.collection.JavaConversions._

@RunWith(classOf[PowerMockRunner])
class PowerMockitoIntegrationTest {
  private var service: Service = _
  private var system: SomeSystem = _
  private var serviceListener: ServiceListener = _

  @Before
  def setupMock() {
    service = Mockito.mock(classOf[Service])
    serviceListener = Mockito.mock(classOf[ServiceListener])
    system = Mockito.spy(new SomeSystem())
    system.add(service)
    system.setServiceListener(serviceListener)
  }

  @Test
  def startSystem() {
    p("Stub using PowerMockito. service.start() should return 1 as we want start of the service to be successful")
    PowerMockito.when(service.start()).thenReturn(1)
    p("Start the system, should start the services in turn")
    system.start()
    p("Verify using Mockito that service started successfuly")
    Mockito.verify(serviceListener).onSuccess(service)
    p("Verifed. Service started successfully")
  }

  private def p(s: String) {
    println(s)
  }
}

Unfortunately I am getting the below compilation error , I am confused why it is appearing and any way we could do get rid of it.

[ERROR] C:\IntellJWorkspace\PowerMockProblem\src\test\scala\PowerMockitoIntegrationTest.scala:29: error: ambiguous reference to overloaded definition,
[ERROR] both method when in object PowerMockito of type [T](x$1: T)org.mockito.stubbing.OngoingStubbing[T]
[ERROR] and  method when in object PowerMockito of type [T](x$1: Any, x$2: Object*)org.mockito.stubbing.OngoingStubbing[T]
[ERROR] match argument types (Int)
[ERROR]     PowerMockito.when(service.start()).thenReturn(1)
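Two common ways around this kind of overload ambiguity (sketches, not verified against this exact setup): since service is a plain Mockito mock here, Mockito's own when has no competing overload for this call shape, and the doReturn/when style avoids resolving an overload on the stubbed call altogether:

// Option 1: plain Mockito stubbing
Mockito.when(service.start()).thenReturn(1)

// Option 2: stub without wrapping the method call in when(...)
Mockito.doReturn(1).when(service).start()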

by Stacker1234 at July 01, 2015 04:55 PM

Do the mutable Collection.empty methods violate Scala's zero-argument naming convention?

This is how the .empty method is declared in the scala.collection.mutable.Map object in Scala 2.11.5:

def empty[A, B]: Map[A, B]

Shouldn't this method have empty parentheses, like this?

def empty[A, B](): Map[A, B]

The page on Scala's naming conventions suggests, without saying it explicitly, that omitting the parentheses on a 0-arity method is the convention for pure-functional code, and including empty parentheses means that the method has a side-effect. (I think I've run into an error message that's more explicit about this.)

The mutable .empty method has a side-effect, since you can distinguish the results of separate calls to .empty. Shouldn't it get empty parentheses, even though its mate in immutable.Map doesn't?

Regarding my own code, is there a special naming convention I should follow when creating and returning a mutable object from a 0-arity method?

by Ben Kovitz at July 01, 2015 04:52 PM

/r/dependent_types

StackOverflow

Akka http handler json validation

I am very new to Scala and Akka. I am trying to write a simple HTTP handler using Akka which receives JSON. I would like to marshal this JSON into a Scala class/object for processing. Since it is input, I would also like to perform basic validation on the required JSON nodes and the types of their values. I found that I have to use spray-json for this, but I am unable to find more information on how to do it; I am looking for samples/templates. Any help would be highly appreciated.
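Not from the question, but a minimal sketch of the usual shape (the User payload and its fields are placeholders; assumes the akka-http spray-json support module is on the classpath): define a spray-json format for a case class and let entity(as[...]) do the unmarshalling, which also rejects requests whose JSON is missing required fields or has values of the wrong type:

object JsonRoute {
  import akka.http.scaladsl.server.Directives._
  import akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport._
  import spray.json.DefaultJsonProtocol._

  // hypothetical payload
  case class User(name: String, age: Int)
  implicit val userFormat = jsonFormat2(User)

  val route =
    post {
      entity(as[User]) { user =>   // malformed or mistyped JSON is rejected before this runs
        complete(s"received ${user.name}")
      }
    }
}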

by g0c00l.g33k at July 01, 2015 04:46 PM

QuantOverflow

Is it possible to use price differences for portfolio optimization in FX markets?

I'm supposed to compute optimal asset allocation for many different FX strategies on different currency pairs. I was given only data containing closing time and profit/loss in number of ticks for every trade for each strategy and currency pair. So for example lines in the file for GBP/USD strategy look like this:

  • 2010.01.01 00:56:08;169
  • 2010.01.01 00:56:08;-122

(169 * 0.00001 = 0.00169 price difference)

The same price difference means the same net profit or loss at various price levels of the currency pair. Of course this is different from returns for those price moves, i.e. (Pt - Pt-1) / Pt-1

  • 1.50000 -> 1.50169: 0.00169 price difference, 0.001126667 return
  • 1.60000 -> 1.60169: 0.00169 price difference, 0.00105625 return

My questions are:

  • can this kind of data be used for portfolio optimization in general?
  • can I use these data somehow with optimize.portfolio() function in R package 'PortfolioAnalytics', which according to its documentation expects asset returns as an input, and still expect to get any meaningful results?
  • should I rather request more complete data with open and close prices?

Sorry if the question is too elementary, I'm a beginner at this. And thank you for any replies.

by bftrading at July 01, 2015 04:45 PM

TheoryOverflow

Is every PSPACE-complete L also NL-hard? If yes, then why? [on hold]

Is every PSPACE-complete L also NL-hard? If yes, then why? If not, then why can't there be an L that is PSPACE-complete and in NL?

by user34613 at July 01, 2015 04:29 PM

Planet Clojure

Briefly, Dynamo Streams + core.async

Introduction To Streams

Dynamo Streams is an AWS service which allows Dynamo item writes (inserts, modifications, deletions) to be accessed as per-table streams of data. It’s currently in preview-only mode, however there’s a version of DynamoDB Local which implements the API.

The interface is basically a subset of Amazon’s Kinesis stream-consumption API, and there’s an adaptor library which allows applications to consume Dynamo streams as if they were originating from Kinesis. In addition to direct/pull consumption, Dynamo streams can be associated with AWS Lambda functions.

From Clojure

Support for streams is included in the recently-minted 0.3.0 version of Hildebrand, a Clojure DynamoDB client (covered in giddy detail in a previous post). Consider all of the details provisional, given the preview nature of the API.

The operations below assume a table exists with streams enabled, with both new and old images included (i.e. before-and-after snapshots, for updates). Creating one would be accomplished like so, with Hildebrand:

(hildebrand/create-table!
  {:secret-key ... :access-key ...
   :endpoint "http://localhost:8000"}
  {:table :stream-me
   :attrs {:name :string}
   :keys [:name]
   ...
   :stream-specification
   {:stream-enabled true
    :stream-view-type :new-and-old-images}})

Note we’re pointing the client at a local Dynamo instance. Now, let’s listen to any updates:

(ns ...
  (:require [clojure.core.async :refer [<! go]]
            [hildebrand.streams :refer
             [latest-stream-id! describe-stream!]]
            [hildebrand.streams.page :refer [get-records!]]))

(defn read! []
  (go
    (let [stream-id (<! (latest-stream-id! creds :stream-me))
          shard-id  (-> (describe-stream! creds stream-id)
                        <! :shards last :shard-id)
          stream    (get-records! creds stream-id shard-id
                      :latest {:limit 100})]
      (loop []
        (when-let [rec (<! stream)]
          (println rec)
          (recur))))))

We retrieve the latest stream ID for our table, and then the identifier of the last shard for that stream. The streams documentation isn’t forthcoming on the details of how and when streams are partitioned into shards - we’re only interested in the most recent items, so this logic will do for a demo.

get-records! is the only non-obvious function above - it continually fetches updates (limit at a time) from the shard using an internal iterator. Updates are placed on the output channel (with a buffer of limit) as vectors tagged with either :insert, :modify or :remove.

:latest is the iterator type - per Dynamo, the other options are :trim-horizon, :at-sequence-number and :from-sequence-number. For the latter two, a sequence number can be provided under the :sequence-number key in the final argument to get-records!

Let’s write some items to our table:

(defn write! []
  (async/go-loop [i 0]
    (<! (put-item! creds :stream-me {:name "test" :count i}))
    (<! (update-item!
         creds :stream-me {:name "test"} {:count [:inc 1]}))
    (<! (delete-item! creds :stream-me {:name "test"}))
    (recur (inc i))))

Running these two functions concurrently, we’d see this output:

[:insert {:name "test" :count 0}]
[:modify {:name "test" :count 0} {:name "test" :count 1}]
[:delete {:name "test" :count 1}]
...

The sequence numbers of the updates are available in the metadata of each of the vectors.

by Nervous Systems at July 01, 2015 04:28 PM

/r/dependent_types

StackOverflow

Ansible: can't access dictionary value - got error: 'dict object' has no attribute

---
- hosts: test
  tasks:
    - name: print phone details
      debug: msg="user {{ item.key }} is {{ item.value.name }} ({{ item.value.telephone }})"
      with_dict: users
  vars:
    users:
      alice: "Alice"
      telephone: 123

When I run this playbook, I am getting this error:

One or more undefined variables: 'dict object' has no attribute 'name' 

This one actually works just fine:

debug: msg="user {{ item.key }} is {{ item.value }}"

What am I missing?

by user1692261 at July 01, 2015 04:07 PM

Is there a convenient helper in Play 2.4.x to build a uri from play.api.mvc.Request.queryString

I would have thought that copy on the Request with an updated queryString would have reset the URI; however, according to the code, it's nothing more than brain-dead vals.

https://github.com/playframework/playframework/blob/2.4.x/framework/src/play/src/main/scala/play/api/mvc/Http.scala

Something somewhere is likely to build the URI from such a Map[String, Seq[String]] -- anyone know where that might be?

Much as I keep trying NOT to write code sadly I keep running into reasons to...

Most likely trivial EXCEPT there are always those stupid corner cases, languages, special characters, encoding and a host of other potential unknowns and if someone is already wearing those scars with pride I would prefer to honour their work by using it.
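For reference, a naive hand-rolled sketch of that reverse direction (deliberately ignoring some of the encoding corner cases the question worries about, which is exactly why an existing helper would be preferable):

import java.net.URLEncoder

def toQueryString(params: Map[String, Seq[String]]): String =
  params.toSeq
    .flatMap { case (key, values) =>
      values.map(v => s"${URLEncoder.encode(key, "UTF-8")}=${URLEncoder.encode(v, "UTF-8")}")
    }
    .mkString("&")

// e.g. rebuild a URI from a request's path and its (possibly modified) queryString map:
// request.path + "?" + toQueryString(request.queryString)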

by Techmag at July 01, 2015 04:07 PM

Why are Arrays invariant, but Lists covariant?

E.g. why does

val list:List[Any] = List[Int](1,2,3)

work, but

val arr:Array[Any] = Array[Int](1,2,3)

fails (because arrays are invariant). What is the desired effect behind this design decision?
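A small illustration of why the invariance is needed for a mutable container (not part of the original question):

val ints: Array[Int] = Array(1, 2, 3)
// If Array were covariant, the next line would compile...
// val anys: Array[Any] = ints
// ...and then this write would put a String into what is really an Array[Int]:
// anys(0) = "boom"
// Immutable List has no write operation, so reading it through a wider type is always safe.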

by fresskoma at July 01, 2015 04:06 PM

Is it possible to use IN clause in plain sql Slick for integers?

There is a similar question here but it doesn't actually answer the question.

Is it possible to use IN clause in plain sql Slick?

Note that this is actually part of a larger and more complex query, so I do need to use plain sql instead of slick's lifted embedding. Something like the following will be good:

val ids = List(2,4,9)
sql"SELECT * FROM coffee WHERE id IN ($ids)"

by Roy Lin at July 01, 2015 04:05 PM

QuantOverflow

How can one get broker order data?

Is there any chance to get order data from any broker, with a label indicating which account it came from?

The accounts can be anonymized; I just need to identify which orders each account sent.

Any help will be appreciated.

by Thomas Pazur at July 01, 2015 04:04 PM

How google finance calculates beta of a stock

How does Google Finance calculate the beta of a stock? What is the proxy for the market? What is the time period it uses for the regression?

by Sriwantha Attanayake at July 01, 2015 04:01 PM

TheoryOverflow

What is the computational complexity of sin and cos for floating point inputs?

What is the computational complexity of the problem


INPUT: integers $x$ and $y$ (both in binary)

OUTPUT: rational approximations to $\sin\left(x \cdot 2^{y}\right)$ and $\cos\left(x \cdot 2^{y}\right)$ whose absolute errors are each less than $1/2^{\operatorname{length}(x)}$?

One could just expand $x \cdot 2^{y}$, but that would use at least $|y|$ bits of space.
(I believe most work on the complexity of such functions
only considers their restrictions to compact intervals.)

by Ricky Demer at July 01, 2015 03:52 PM

StackOverflow

Proving equivalence of nested path dependent types members

This is a simplified case, and I am totally open to a different/better way to achieve this

trait Container {
  type T
  def data: List[T]
}

trait Transform {
  val from: Container
  val to: Container
  def xform: from.T => to.T
}

case class Identity(c: Container) extends Transform {
  val from = c
  val to = c
  def xform = { t: from.T => t }
}

This yields the predictable error of:

<console>:12: error: type mismatch;
 found   : t.type (with underlying type Identity.this.from.T)
 required: Identity.this.to.T
         def xform = { t: from.T => t }

The goal is basically to have a transform which transforms objects underlying the container, but to be able to convince the type checker (without horrible horrible casts all over the place) that the types are the same.

What is the best way to be able to show equivalences and relationships of types in this way?

Like I said, totally open to restructuring the code and I promise in the actual example it is for a real purpose :)
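One commonly suggested shape (a sketch reusing the Container and Transform definitions above; whether it carries over to the real, larger example is an assumption) is to give to a singleton type tied to from, so the compiler can see that the two member types coincide:

case class Identity(c: Container) extends Transform {
  val from = c
  val to: from.type = from          // now to.T is known to be from.T
  def xform = { t: from.T => t }    // from.T => from.T conforms to from.T => to.T
}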

by A Question Asker at July 01, 2015 03:51 PM

CompsciOverflow

Counterexample to the converse of the Pumping Lemma

A discussion here reminded me of a question I've had for a while. Define a predicate $Q(L)$ on languages in what should be a familiar form:

$Q(L)$ = There exists an integer $p>0$ such that for all $w\in L$ with length $|w|\ge p$, $w$ can be factored as $w=xyz$ with (1) $|y|>0$, (2) $|xy|\le p$, (3) for all $i = 0, 1, \dotsc$, we have $xy^iz\in L$.

Of course, the pumping lemma for regular languages asserts that if $L$ is a regular language, then $Q(L)$ is true. The converse isn't true, but while it seems like finding a counterexample should be a natural question, it doesn't seem to be in any of the yard or so of texts on my shelf. In other words, is there some canonical answer to the following that I've simply missed?

Find a non-regular language $L$ that satisfies $Q(L)$.

by Rick Decker at July 01, 2015 03:47 PM

StackOverflow

Highland.js for CSV parsing

I'm trying to write this in a very functional manner. We're using Highland.js for managing the stream processing; however, because I'm so new, I think I'm getting really confused with how I can deal with this unique situation.

The issue here is that all the data in the file stream is not consistent. The first line in a file is typically the header, which we want to store into memory and zip all rows in the stream afterwards.

Here's my first go at it:

var _      = require('highland');
var fs     = require('fs');
var stream = fs.createReadStream('./data/gigfile.txt');
var output = fs.createWriteStream('output.txt');

var headers = [];

var through = _.pipeline(
    _.split(),
    _.head(),
    _.doto(function(col) {
        headers = col.split(',');
        return headers;
    }),

    ......

    _.splitBy(','),
    _.zip(headers),
    _.wrapCallback(process)
);

_(stream)
    .pipe(through)
    .pipe(output);

The first command in the pipeline is to split the files by lines. The next grabs the header and the doto declares it as a global variable. The problem is the next few lines in the stream don't exist and so the process is blocked...likely because the head() command above it.

I've tried a few other variations but I feel this example gives you a sense of where I need to go with it.

Any guidance on this would be helpful -- it also brings up the question of, if I have different values in each of my rows, how I can splinter the process stream amongst a number of different stream operations of variable length/complexity.

Thanks.

EDIT: I've produced a better result but I'm questioning the efficiency of it -- is there a way I can optimize this so on every run I'm not checking if the headers are recorded? This still feels sloppy.

var through = _.pipeline(
    _.split(),
    _.filter(function(row) {
        // Filter out bogus values
        if (! row || headers) {
            return true;
        }
        headers = row.split(',');
        return false;
    }),
    _.map(function(row) {
        return row.split(',')
    }),
    _.batch(500),
    _.compact(),
    _.map(function(row) {
        return JSON.stringify(row) + "\n";
    })
);

_(stream)
    .pipe(through)

by ddibiase at July 01, 2015 03:46 PM

How can I functionally iterate over a collection combining elements?

I have a sequence of values of type A that I want to transform to a sequence of type B.

Some of the elements of type A can be converted to a B; however, some other elements need to be combined with the immediately previous element to produce a B.

I see it as a small state machine with two states: the first one handles the transformation from A to B when just the current A is needed, or saves the A if the next row is needed and goes to the second state; the second state combines the saved A with the new A to produce a B and then goes back to state 1.

I'm trying to use scalaz's Iteratees but I fear I'm overcomplicating it, and I'm forced to return a dummy B when the input has reached EOF.

What's the most elegant solution to do it?
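One plain-library alternative to Iteratees (a sketch; the helper names and the Option-based encoding of "this element must wait for the next one" are mine) is a foldLeft that carries the saved A as its state, so no dummy B is ever needed:

// `alone` returns Some(b) when the element converts on its own,
// or None when it has to be combined with the element that follows it.
def convertAll[A, B](as: Seq[A])(alone: A => Option[B], combine: (A, A) => B): Seq[B] = {
  val (bs, leftover) = as.foldLeft((Vector.empty[B], Option.empty[A])) {
    case ((acc, Some(saved)), a) => (acc :+ combine(saved, a), None)   // state 2
    case ((acc, None), a) =>                                           // state 1
      alone(a) match {
        case Some(b) => (acc :+ b, None)
        case None    => (acc, Some(a))      // save it and wait for the next element
      }
  }
  // `leftover` is non-empty if the input ended mid-pair; decide explicitly what that means.
  bs
}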

by user180940 at July 01, 2015 03:41 PM

/r/clojure

Lobsters

Endless: An iOS web browser with a focus on privacy and security - now in the App Store

When I switched again from Android to iOS, I had to give up using Firefox and its add-ons like HTTPS-Everywhere, Disconnect, and Self-Destructing Cookies.

I started making a browser for iOS (as a wrapper around UIWebView, of course) that had these things built-in, and I have been using it as my primary browser on my phone since last year.

It’s BSD-licensed and the source code is on GitHub, though I figured it would be easier to get people to use it and contribute fixes if it were easily downloadable, so it is now available for free in the App Store.

Comments

by jcs at July 01, 2015 03:22 PM

StackOverflow

FreeBSD : Unable to start apache

I installed the apache24 via pkg installer

pkg install apache24

and added the following line apache24_enable="YES" to the /etc/rc.conf file

now I am trying to start the apache with service apache24 start

and it displays the following error

apache24 does not exist in /etc/rc.d or the local startup directories(/usr/local/etc/rc.d)

How can I start Apache?

by DharanBro at July 01, 2015 03:13 PM

CompsciOverflow

How is the micro code executed within a processor?

How does the microprocessor convert the machine code to micro code? What part of the processor is at play?

by Kraken at July 01, 2015 03:11 PM

TheoryOverflow

Is the nonnegativeness of a polynomial hard for $\mathsf{NP}_\mathbb{R}$?

It is clear that the following problem is in $\mathsf{NP}_\mathbb{R}$.

Input: a list $P$ of triplets $(a,s,t)$ where $s$ and $t$ are nonnegative integers.
Output: is there an $x\in \mathbb{R}$ such that $$p(x) =\sum_{(a,s,t) \in P} a x^s(1-x)^t \geq 0?$$

Is this problem $\mathsf{NP}_\mathbb{R}$-hard?

by user34585 at July 01, 2015 03:03 PM

/r/netsec

/r/compsci

StackOverflow

Count filtered records in scala

As I am new to Scala, this problem might look very basic to all.
I have a file called data.txt whose contents look like below:

xxx.lss.yyy23.com-->mailuogwprd23.lss.com,Hub,12689,14.98904563,1549
xxx.lss.yyy33.com-->mailusrhubprd33.lss.com,Outbound,72996,1.673717588,1949
xxx.lss.yyy33.com-->mailuogwprd33.lss.com,Hub,12133,14.9381027,664
xxx.lss.yyy53.com-->mailusrhubprd53.lss.com,Outbound,72996,1.673717588,3071

I want to split the line and find the records depending upon the numbers in xxx.lss.yyy23.com

 val data = io.Source.fromFile("data.txt").getLines().map { x => (x.split("-->"))}.map { r => r(0) }.mkString("\n")  

which gives me

xxx.lss.yyy23.com
xxx.lss.yyy33.com
xxx.lss.yyy33.com
xxx.lss.yyy53.com  

This is how I am trying to count the matching records:

 data.count { x => x.contains("33")}  

How do I get the count of records that do not contain "33"?
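For what it is worth, a sketch that keeps the host names as a collection rather than one big String, so both counts fall out directly:

val hosts = io.Source.fromFile("data.txt").getLines()
  .map(_.split("-->")(0))
  .toList

val with33    = hosts.count(x => x.contains("33"))
val without33 = hosts.count(x => !x.contains("33"))   // records that do not contain "33"
// equivalently: hosts.size - with33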

by Aman at July 01, 2015 02:42 PM

/r/netsec

UnixOverflow

Install a package and all its dependencies without a confirmation prompt with FreeBSD pkg

Is there a way to automatically install packages and their dependencies, like with apt-get -y in Debian, without being prompted each and every time?

Installing Webmin, NGiNX and nano in one step, with all their dependencies handled automatically:

sudo apt-get -y install webmin nginx nano

On FreeBSD 10, (I'm still getting used to Ports) I would type:

pkg install webmin nginx nano

If I append the -y switch, it just fails. I looked at the documentation at meBSD and FreeBSD Handbook and there doesn't seem to be an option/switch to use. Any ideas anyone?

by Danijel J at July 01, 2015 02:28 PM

StackOverflow

Zip arrays with MongoDB

Is it possible to zip arrays within a mongo document? I mean the functional programming definition of zip, where corresponding items are paired into tuples.

To be more precise, I would like start with a mongo document like this:

{
    "A" : ["A1", "A2", "A3"],
    "B" : ["B1", "B2", "B3"],
    "C" : [100.0, 200.0, 300.0]
}

and end up with mongo documents like these:

{"A":"A1","B":"B1","C":100.0},
{"A":"A2","B":"B2","C":200.0},
{"A":"A3","B":"B3","C":300.0},

Ideally this would use the aggregation framework, which I am already using to get my documents to this stage.

by AnotherDayAnotherRob at July 01, 2015 02:27 PM

Can map/flatMap sequence with several paths be translated a for-comprehension?

I think it's not possible but maybe I'm missing something?

Here is an example

val sentences = "hello.what is your name?I am Loïc"     

sentences.split("\\.|\\?").flatMap { sentence =>
  if (sentence.startsWith("h")) sentence.split(" ").map(word => word.toUpperCase)
  else if (sentence.startsWith("w")) sentence.split(" ").flatMap(word => List(word.toUpperCase, "-").map(word => word.toUpperCase))
  else List(sentence)
}

Can this be translated to a for-comprehension expression?

I noticed that I have this kind of pattern quite often when I'm using map/flatMap on futures (for example on webservice calls), if I need to pattern-match on the service response. So I'm trying to improve the readability.
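It can be written as a for-comprehension, since two generators desugar back into flatMap and map; the branching then just has to produce a collection for the inner generator. A sketch of the same logic (with .toSeq added so all three branches share one collection type):

val result = for {
  sentence <- sentences.split("\\.|\\?").toSeq
  word <- {
    if (sentence.startsWith("h"))
      sentence.split(" ").toSeq.map(_.toUpperCase)
    else if (sentence.startsWith("w"))
      sentence.split(" ").toSeq.flatMap(w => List(w.toUpperCase, "-"))
    else
      Seq(sentence)
  }
} yield word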

Thanks :)

by Loic at July 01, 2015 02:18 PM

CompsciOverflow

Delaunay Triangulation on Convex Polytopes — Uniform Sampling

My goal is to uniformly sample from a convex polytope. I know that for the simpler case, where I have to uniformly sample from a simplex, I can use Bayesian Bootstrap, discussed in these posts:

Uniform sampling from a simplex

Random vectors uniformely distributed into convex n-polytope

Therefore, I'm very interested in this approach. But I don't really know how to use Delaunay Triangulation here. What I have is a linear equation Ax = b and a non-negativity constraint that $x \geq \vec{0}$, and I want to sample x uniformly. Can someone tell me how to do the Delaunay Triangulation here? Thanks in advance!

by Miller Zhu at July 01, 2015 02:08 PM

Fefe

Listening recommendation: Max Uthoff on poverty and views of humanity. ...

Listening recommendation: Max Uthoff on poverty and views of humanity. You should all take the time for it. Ideally right now.

Update: This is part of this CD, which, as someone just told me by email, is said to be at a similarly high level throughout.

July 01, 2015 02:01 PM

QuantOverflow

Calibration Merton Jump-Diffusion

Consider the following SDE $dV_t = rV_tdt +\sigma V_t dW_t + dJ_t$

where $J_t$ is a Compound poisson process with log-Normal jump size $Y_i$.

How am I supposed to calibrate this model to CDS spreads? The problem of course is there doesn't exist an analytical formula for the survival probability function...

[EDIT] Well, what I'd need is in fact the distribution of the first hitting time, that is

$\tau = \inf\{t>0 : V_t = x\}$

where x is some barrier $\in R$

$Pr\left\{V_0 e^{(r-(1/2) \sigma^2)t + \sigma W_t + \sum_{i=0}^{N(t)} Y_i} = x \right\} =\\Pr \left\{(r-(1/2)\sigma^2)t + \sigma W_t + \sum_{i=0}^{N(t)}Y_i =\ln(x/V_0) \right\} = \\ Pr\left\{\sigma W_t + \sum_{i=0}^{N(t)}Y_i =\ln(x/V_0) - (r-(1/2)\sigma^2)t \right\}$

The problem is here...I don't know which distribution comes out in the left hand side

by Vittorio Apicella at July 01, 2015 01:59 PM

Lobsters

StackOverflow

ReactiveMongo: Projection element not return using reactive mongo query

I have the following MongoDB document:

{
"_id" : ObjectId("5592c0e6ea16e552ac90e169"),
-----------
"location" : {
    "_id" : ObjectId("5592c17fc3ad8cbffa0e9778"),
    "companyFieldId" : ObjectId("559140f1ea16e552ac90e058"),
    "name" : "Location",
    "alias" : "Points",
    "locations" : [ 
        {
            "_id" : ObjectId("5592c17fc3ad8cbffa0e9779"),
            "country" : "India",
            "state" : "Punjab",
            "city" : "Moga",
            "zip" : "142001",
            "custom" : false
        }, 
        {
            "_id" : ObjectId("5592c17fc3ad8cbffa0e977a"),
            "country" : "India da address",
            "custom" : true
        }
    ],
    "mandatory" : true,
    "allowForGroups" : false
},
-----------
}

When I query the document using the following query:

companyCollection.find($doc("_id" $eq companyId, "location._id" $eq locationId)).projection($doc("location" -> 1, "_id" -> 1)).cursor[LocationField].headOption;

It returns only the company id. But when I change the projection value to projection($doc("location" -> 1, "_id" -> 0)) it returns an empty document. I am using the Query DSL to write the queries.

UPDATE

When I create a query like:

companyCollection.find($doc("_id" $eq companyId, "department._id" $eq departmentId), $doc("department" -> 1, "_id" -> 0)).cursor[Company].headOption 

With this, my return value is mapped to Company with its LocationField property populated via the projection, and the rest of the fields are ignored by MongoDB. But my basic requirement is to return only the inner location document value and map it to LocationField. When I run the query in the mongo console like:

db.companies.find({"_id": ObjectId('5592c0e6ea16e552ac90e169'), "location._id": ObjectId('5592c17fc3ad8cbffa0e9778')}, {"location": 1, "_id": 0})

The result behavior is the same as my ReactiveMongo query. Is it possible with MongoDB to return only the inner document, instead of the full document structure?

by Harmeet Singh Taara at July 01, 2015 01:48 PM

Planet Emacsen

Irreal: org-goto

I learned a really useful Org mode command from Artur Malabarba's first anniversary post: org-goto. The idea is that you want to navigate to somewhere else in the current Org buffer. You could fold the buffer and navigate to the proper subtree but often you want to leave the current subtree unfolded as well as the subtree you're navigating to. The org-goto command, bound to 【Ctrl+c Ctrl+j】 by default, allows you to do just that. A copy of the buffer is created with an overlay displaying the buffer in overview mode. You can navigate within that buffer and press 【Return】 on the desired subtree which will take you back to the original buffer with the point at the new subtree.

That may not seem too useful but see Malabarba's post for a compelling use case. In my case, it's useful because of the large Org files I use to store my data. When I load such a file via a bookmark or it gets reloaded during a sync from git, it will be completely folded. The files are too large to set them to be completely unfolded so I have to have some way of finding the proper place. I used to call org-sparse-tree and then do a regular expression search for the proper heading. With org-goto, I can simply display the headers, navigate to the desired one, and press 【Return】 to navigate to the proper heading that will be conveniently unfolded. Very handy.

It's a bit hard to explain how org-goto works so you should experiment with it a bit to see how you can fold it into your workflow. If you use Org files to organize your data as I do, this command is very likely to be a win for you.

UPDATE: There's a really good discussion of org-goto and related navigation methods in the comments. Be sure to take a look.

by jcs at July 01, 2015 01:43 PM

StackOverflow

Clojure Function Literals

I am doing the Intro to Functions problem, but I don't quite understand what is going on. How are the 4 expressions below different? If they are all the same, why have 4 different syntaxes?

(partial + 5)
#(+ % 5)
(fn [x] (+ x 5))
(fn add-five [x] (+ x 5))

by user1259898 at July 01, 2015 01:39 PM

CompsciOverflow

Why is Comb-sort (aka Dobosiewicz-sort) faster than Cocktail-sort (aka shake-sort)?

According to wikipedia, Cocktail-sort has an average performance of $O(n^2)$, whereas Comb-sort's average performance is $Ω(n^2/2^p)$, where $p$ is the number of increments.

There's no explanation given for this substantial difference between two similar algorithms, both variants of Bubble-sort. Can anyone help explain it?

by Dun Peal at July 01, 2015 01:39 PM

StackOverflow

Why am I getting java.lang.VerifyError when enhancing ebean scala entity

I am new to ebean and scala/akka and am trying to persist a minimal case class as an ebean entity. My dependencies are as follows:

scalaVersion := "2.11.5"

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-actor" % "2.3.9",
  "com.typesafe.akka" %% "akka-remote" % "2.3.9",
  "com.typesafe.akka" %% "akka-cluster" % "2.3.9",
  "com.typesafe.akka" %% "akka-testkit" % "2.3.9" % "test",
  "org.scalatest" %% "scalatest" % "2.2.4" % "test",
  "com.typesafe.slick" %% "slick" % "3.0.0",
  "com.typesafe.slick" %% "slick-codegen" % "3.0.0",
  "org.sorm-framework" % "sorm" % "0.3.18",
  "org.avaje.ebeanorm" % "avaje-ebeanorm" % "4.7.1",
  "org.avaje.ebeanorm" % "avaje-ebeanorm-agent" % "4.5.3",
  "org.avaje" % "avaje-agentloader" % "1.1.3",
  "org.apache.commons" % "commons-pool2" % "2.0")

and my entity:

package zw.co.esol.eswitch.model

import javax.persistence.Entity
import javax.persistence.Id
import com.avaje.ebean.Model

@Entity
case class Test(@Id id: Long, firstname: String) extends Model

and my main:

object ApplicationMain extends App {

  val system = ActorSystem("eswitch", ConfigFactory.load())

  //system.logConfiguration()

    // Load the agent into the running JVM process
    if (!AgentLoader.loadAgentFromClasspath("avaje-ebeanorm-agent","debug=1;packages=zw.co.esol.eswitch.model.*")) {
      println("avaje-ebeanorm-agent not found in classpath - not dynamically loaded");
    }

    EbeanDbServer.init
    //more code follows.......

and my ebean init code:

object EbeanDbServer {

  def init = {

    // programmatically build a EbeanServer instance  
    // specify the configuration...  

    println("@@ Starting EbeanServer...")

    var config = new ServerConfig();  
    config.setName("pgtest");  

    // Define DataSource parameters  
    var postgresDb = new DataSourceConfig();  
    postgresDb.setDriver("com.mysql.jdbc.Driver");  
    postgresDb.setUsername("user");  
    postgresDb.setPassword("password");  
    postgresDb.setUrl("jdbc:mysql://127.0.0.1:3306/eswitch");  
    postgresDb.setHeartbeatSql("select count(*) from message");  

    config.setDataSourceConfig(postgresDb);  

    // specify a JNDI DataSource   
    // config.setDataSourceJndiName("someJndiDataSourceName");  

    // set DDL options...  
    config.setDdlGenerate(false);  
    config.setDdlRun(false);  

    config.setDefaultServer(false);  
    config.setRegister(false);  

    var test = Test(1, "stan")
    // automatically determine the DatabasePlatform  
    // using the jdbc driver   
    // config.setDatabasePlatform(new PostgresPlatform());  

    // specify the entity classes (and listeners etc)  
    // ... if these are not specified Ebean will search  
    // ... the classpath looking for entity classes.  

      config.addClass(classOf[Test]);  

    // specify jars to search for entity beans  
 //   config.addJar("someJarThatContainsEntityBeans.jar");  

    // create the EbeanServer instance  
    val server: EbeanServer = EbeanServerFactory.create(config);

    println("@@ EbeanServer started... : " + server.getName)

  }
}

But I get the following output when running the app:

ebean-enhance> cls: zw/co/esol/eswitch/model/Test  msg: ... skipping add equals() ... already has equals() hashcode() methods
ebean-enhance> cls: zw/co/esol/eswitch/model/Test  msg: enhanced 
[error] (run-main-0) java.lang.VerifyError: Bad type on operand stack
[error] Exception Details:
[error]   Location:
[error]     zw/co/esol/eswitch/model/Test.<init>(JLjava/lang/String;)V @2: invokevirtual
[error]   Reason:
[error]     Type uninitializedThis (current frame, stack[0]) is not assignable to 'zw/co/esol/eswitch/model/Test'
[error]   Current Frame:
[error]     bci: @2
[error]     flags: { flagThisUninit }
[error]     locals: { uninitializedThis, long, long_2nd, 'java/lang/String' }
[error]     stack: { uninitializedThis, long, long_2nd }
[error]   Bytecode:
[error]     0000000: 2a1f b600 932a 2db6 0096 2ab7 0099 2abb
[error]     0000010: 009b 592a b700 9eb5 00a0 2ab8 00a6 b1  
java.lang.VerifyError: Bad type on operand stack
Exception Details:
  Location:
    zw/co/esol/eswitch/model/Test.<init>(JLjava/lang/String;)V @2: invokevirtual
  Reason:
    Type uninitializedThis (current frame, stack[0]) is not assignable to 'zw/co/esol/eswitch/model/Test'
  Current Frame:
    bci: @2
    flags: { flagThisUninit }
    locals: { uninitializedThis, long, long_2nd, 'java/lang/String' }
    stack: { uninitializedThis, long, long_2nd }
  Bytecode:
    0000000: 2a1f b600 932a 2db6 0096 2ab7 0099 2abb
    0000010: 009b 592a b700 9eb5 00a0 2ab8 00a6 b1  

        at zw.co.esol.eswitch.database.EbeanDbServer$.init(EbeanDbServer.scala:45)
        at zw.co.esol.eswitch.global.ApplicationMain$.delayedEndpoint$zw$co$esol$eswitch$global$ApplicationMain$1(ApplicationMain.scala:28)
        at zw.co.esol.eswitch.global.ApplicationMain$delayedInit$body.apply(ApplicationMain.scala:17)
        at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
        at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
        at scala.App$$anonfun$main$1.apply(App.scala:76)
        at scala.App$$anonfun$main$1.apply(App.scala:76)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
        at scala.App$class.main(App.scala:76)
        at zw.co.esol.eswitch.global.ApplicationMain$.main(ApplicationMain.scala:17)
        at zw.co.esol.eswitch.global.ApplicationMain.main(ApplicationMain.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)

I am using java version "1.7.0_55" on Linux x86_64.

What could I be doing wrong? I have tried to search for a solution but all the threads I have seen so far are not helping.
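One thing worth checking (an assumption on my part, not a confirmed fix for this exact VerifyError): the Ebean enhancer is designed around JavaBean-style entities, i.e. a no-arg constructor plus mutable fields, which a case class with constructor-only vals does not give it. A sketch of that shape:

import javax.persistence.{Entity, Id}
import com.avaje.ebean.Model

@Entity
class Test extends Model {
  // depending on how the enhancer reads annotations, meta-targets such as
  // @(Id @scala.annotation.meta.field) may be needed on Scala fields
  @Id var id: Long = _
  var firstname: String = _
}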

by Stanford Amos Bangaba at July 01, 2015 01:37 PM

/r/netsec

CompsciOverflow

Uniform Sampling on Intersection of Faces of Simplices

I'm trying to sample uniformly on the intersections of the faces of several simplices, with all coordinates being non-negative. That is, given $$A\vec{x}=\vec{b} \quad \text{and} \quad \vec{x} \geq \vec{0},$$ I want to sample $\vec{x}$ uniformly. Just to clarify, if $A$'s dimension is $m \times n$, then $m \ll n$ (say, $m=15,n=10000$).

I am well aware that rejection sampling and MCMC sampling can solve this problem. However, I have already implemented both approaches, and neither of these two methods performs well enough. This is because the dimension of my sampling space usually goes up to 10000; rejection sampling simply throws away too many points and MCMC takes forever to converge. Therefore, I'm desperate to try new methods. (In other words, please only provide answers with non-MCMC, non-rejection-sampling methods.)

In a very limited setting, one approach is able to perform well in such a high dimension: suppose I want to sample $k$ points in an $n$-dimensional sampling space, and $A=\vec{1}$. Then this is sampling from a simplex. I can simply (a code sketch follows the steps):

  1. sample $n-1$ points from $[0,b]$ and sort them into ascending order.

  2. Add 0 to the front of the list and $b$ to the end of the list. I now have a list (call it $L$) of $n+1$ elements.

  3. Do $d_i = u_{i+1}-u_i$ for each $u_i \in L$ except $u_{end} = b$. Then the list of $d_i$ has $n$ elements and follows a uniform distribution, and adds up to $b$.
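A Scala sketch of these three steps (the helper name is mine):

import scala.util.Random

def sampleSimplex(n: Int, b: Double, rng: Random = new Random): IndexedSeq[Double] = {
  val cuts   = IndexedSeq.fill(n - 1)(rng.nextDouble() * b).sorted     // step 1
  val points = 0.0 +: cuts :+ b                                        // step 2
  points.sliding(2).map { case Seq(lo, hi) => hi - lo }.toIndexedSeq   // step 3: n gaps summing to b
}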

This approach works very fast, and its convergence of uniformity is much better than MCMC sampling and rejection sampling. But obviously, the previous example stands on a setting that is too limited. So here comes my question:

Can the above sampling method be generalized into a method that can sample on a bunch of linear constraints? That is, A has multiple rows and non-negative entries that are not necessarily 0 and 1. A concrete example is:

Given $$A= \left( \begin{array}{ccc} 1 & 3 & 20 \\ 9 & 6 & 2 \end{array} \right), $$ $$b=\left( \begin{array}{c} 15 \\ 20 \end{array} \right),$$ sample $x$ where $x$ satisfies $Ax=b$. Many thanks in advance!!

by Miller Zhu at July 01, 2015 01:25 PM

StackOverflow

How using refined to express constraints with constants > 22

I am trying to explore possibilities with refined (and shapeless) to have improved type checking.

I would like to represent, with types, some constraints on intervals or sizes.

So, with refined, I can write things like this:

type Name = NonEmpty And MaxSize[_32]
type Driver = Greater[_15]

case class Employee(name : String @@ Name, age : Int @@ Driver = refineLit[Driver](18))

But I would like to express constraints with larger naturals.

type BigNumber = Greater[_1000]

This one doesn't work, because _1000 is not defined. The last one already defined is _22. I can, with shapeless Succ, make my own, but it is very cumbersome.

Example :

type _25 = Succ[Succ[Succ[_22]]]
type _30 = Succ[Succ[Succ[Succ[Succ[_25]]]]]
type _40 = Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[_30]]]]]]]]]]
type _50 = Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[_40]]]]]]]]]]
type _60 = Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[_50]]]]]]]]]]
type _70 = Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[_60]]]]]]]]]]
type _80 = Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[_70]]]]]]]]]]
type _90 = Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[_80]]]]]]]]]]
type _100 = Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[Succ[_90]]]]]]]]]]
// etc.

Is there a better way to express such constraints, or to make _1000 in a more efficient way? Is there something I might have missed?

Edit :

I have tried Travis's proposition:

val thousand = shapeless.nat(1000)

But this line causes a StackOverflowError at compile time (at macro expansion). If I try with a smaller number, it is OK.

val limit = shapeless.nat(50)
type BigNumber = Greater[limit.N]

case class TestBigNumber(limit : Int @@ BigNumber)

In my environment, the StackOverflowError is raised for numbers greater than 400.

Moreover, with this code, compilation never ends (using sbt):

val n_25 = shapeless.nat(25)
type _25 = n_25.N

val n_32 = shapeless.nat(32)
type _32 = n_32.N

val n_64 = shapeless.nat(64)
type _64 = n_64.N

by volia17 at July 01, 2015 01:17 PM

Scalate SSP fails on certain if blocks

Consider the following dummy SSP file:

#{
    val testMap1 =  Map("a" -> true, "b" -> true, "c" -> true)
    val testMap2 =  Map("d" -> false, "e" -> false, "f" -> true)

    val something = true
    val value = 3
}#

<html>
<head>
<title>Test</title>
</head>

<body>

#if(something)
    <h1>True</h1>
#elseif(value > 2)
    <h1>False, but higher</h1>
#else
    <h1>False</h1>
#end


#if(testMap1.forall(_._2) && testMap2.forall(_._2))
    <h1>All true</h1>
#elseif(testMap1.forall(_._2) || testMap2.forall(_._2))
    <h1>One map true</h1>
#else
    <h1>Neither</h1>
#end

which is rendered by the following:

import java.io.File
import org.fusesource.scalate.{ TemplateSource, TemplateEngine }

val engine = new TemplateEngine()
val file = new File("path/to/file.ssp")

engine.layout(TemplateSource.fromFile(file))

This will fail.

The first if block will nicely execute, but the second if block will fail with some SyntaxError (Missing if at... or Cannot have more than one else at ...). I don't see any syntax difference between the two blocks. I'm bumping into this problem each time I generate booleans "on the fly" with some functional stuff. Am I not allowed to do this?

For instance, if I change the second block to the following, it does work:

#{
    val testMap1true = testMap1.forall(_._2)
    val testMap2true = testMap2.forall(_._2)        
}#

#if(testMap1true && testMap2true)
    <h1>All true</h1>
#elseif(testMap1true || testMap2true))
    <h1>One map true</h1>
#else
    <h1>Neither</h1>
#end

What is going on here?

by Gx1sptDTDa at July 01, 2015 01:16 PM

/r/netsec

StackOverflow

How track json request sent to Elasticsearch via elastic4s client?

Say that I use code like this:

ElasticClient client = ...
client.execute{search in "places"->"cities" query "paris" start 5 limit 10}

How can I see what JSON request has been sent to Elasticsearch?

by Cherry at July 01, 2015 01:02 PM

Fefe

Snake oil is only one side of the industry. ...

Snake oil is only one side of the industry. The other side is the whitepapers. Every company that thinks anything of itself publishes some whitepapers on the web with euphoric stories about its services and products. Some startups haven't understood the principle yet and only hand out their whitepapers after registration, but with big companies you can generally download them freely.

While clicking around I just noticed a particularly fine specimen. It's a whitepaper by RSA about, ... well, it's not entirely clear. Their team? Their products? In any case it's about incident response (also known as "fire brigade" or "Ghostbusters" missions) in APT cases (also known as "Windows trojans"). Ordinary malware doesn't exist anymore; nowadays everything is APT. Everyone can live with APT. The hacked organization, because against an APT there's simply nothing you can do, it's all 0day and military-grade ultra-intelligence-agency quality and so on. The investigating specialists, because hey, who can claim to have found a genuine APT! Well, almost everyone by now, but let's not be like that.

But let's take a look at the whitepaper. It starts off with them impressively describing how rocket-science-Roswell their UFO technology is! While Villariba has only investigated 10%, Villabajo is already 90% done thanks to RSA brain-surgeon equipment (page 5).

That makes you curious how they pull it off! So you leaf ahead to chapter 3, Analysis Methodology. And what does it say there?

RSA IR employs a methodology that is founded on industry standards.
Wait, what? I thought you were 90% ahead of the industry! How can you then use the same standards as the rest of the industry? That must be the holistic approach!
The holistic approach includes the following four core components:
  • Intelligence gathering and research;
  • Host-based forensic analysis;
  • Network-based forensic analysis; and,
  • Malware analysis.
Or in other words: what everyone else does too, the way everybody does it, because it's obvious.

Suddenly there's talk of iterative approaches and repeatable processes, which no longer sounds quite so 90%-faster-than-everywhere-else, and then there's this highlight:

To complete this work, RSA IR uses several commercial and open source forensic tools
Wait, hold on, you use other people's off-the-shelf tools? I thought YOUR tools were the hot shit!
In addition, the Team will leverage available tools and technologies in place within the enterprise
Yeah, uh, well, what else would you use at that point?

But fine, that's all still rather abstract. Maybe things become clearer in the case study section?

The RSA IR team used Volatility to analyze the submitted memory dump. Very quickly, while parsing the Application Compatibility cache from the memory image, RSA IR confirmed this server likely had unauthorized activity based on locations and filenames that were executed on the system.
That one is my personal highlight. First: Volatility is an open-source tool. But I could amuse myself for hours over phrasings like "confirmed this server likely". Likely is what you have beforehand. Confirming turns a likely into a definitely. You cannot confirm that something likely happened. Something likely happened, then you confirm it, then you know it happened and no longer need a likely.

You can see: whitepapers are a source of entertainment for people interested in the linguistic trident of bullshit, snake oil, and weaseling.

Hey, maybe I should write a whitepaper myself. "We do the same thing as our competitors. We just do it better." :-)

July 01, 2015 01:01 PM

QuantOverflow

Is there a popular curve fitting formula of options skew vs strike price or vs Delta?

I was trying to build an options trading/optimization system. But it often gets more inaccurate as it scans through options far from ATM because of, you know, options skew.

That is because I did not price in options skew, or jump premium. I am wondering if there is a popular formula that takes "degree of options skew" and either strike price or Delta as inputs, and then gives me the skew premium in terms of IV as output.

Thank you very much.

by user496 at July 01, 2015 12:58 PM

StackOverflow

Use list of String as a parameter in elastic4s termsQuery

I use the elastic4s library to communicate with Elasticsearch. I would like to make an equivalent of "SELECT * FROM MY_INDEX WHERE MY_FIELD IN (VALUE_1, VALUE_2, ...)"

I produced this query:

val req = search in indexName -> {query indexType
   {bool
     must (
       termsQuery ("myField" transformed (myListOfValues))
     )
   }
}

The method termsQuery is defined as follows in elastic4s:

def termsQuery(field: String, values: AnyRef*): TermsQueryDefinition

How can I turn my myListOfValues list into AnyRef*?

Thank you for your help.
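The usual Scala answer to "pass a collection where varargs are expected" is the : _* ascription; applied to the signature quoted above (and roughly following the DSL shape from the question), a sketch would be:

val myListOfValues = List("VALUE_1", "VALUE_2")

val req = search in indexName -> indexType query {
  bool {
    must(
      termsQuery("myField", myListOfValues: _*)   // expands the list into AnyRef*
    )
  }
}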

by h.bell consulting at July 01, 2015 12:58 PM

Coercing Clojure's PersistentArrayMap to Java's Map<String, String>

I'm trying to send a client identification to an IMAP server using com.sun.mail.imap.IMAPSSLStore's id method. The problem is that it requires a Map<String, String> as an argument, so the call

(.id store (HashMap. {"foo" "bar"}))

fails with IllegalArgumentException.

What am I doing wrong?

by lumrandir at July 01, 2015 12:57 PM

CompsciOverflow

Computing MD5 hashes of huge numbers [on hold]

I need to solve this riddle:

The riddle

So I need to compute $\operatorname{MD5}(2^{1024^2})$.

I know what MD5 is and I tried to compute the number inside the parentheses and insert it into an MD5 generator, which doesn't give me the correct answer, since the number is too big.

I also know that $2^{1024}$ is basically 1 byte and I thought 1 kilobyte is the answer, and again it's not correct.

Do you have any idea how I can solve it?

by noamanza at July 01, 2015 12:54 PM

StackOverflow

Is it good practice to make case classes sealed?

The main reason to seal classes seems to be that this allows the compiler to do exhaustivity checks when pattern matching on those classes. Say I have data types meant for pattern matching. Toy example:

sealed trait Statement
case class Assign(name: String, value: Int) extends Statement
case class Print(name: String) extends Statement
case class IfZero(name: String, thenn: Statement, els: Option[Statement]) extends Statement
case class Block(statements: List[Statement]) extends Statement

The use case for these classes would be to consume them through pattern matching:

def execute(statement: Statement): Unit = statement match {
    case Assign(name, value)      => ???
    case Print(name)              => ???
    case IfZero(name, thenn, els) => ???
    case Block(statements)        => statements foreach { execute(_) }
  }

To this end, the Statement trait is sealed so that the compiler can warn me if I forget one statement kind in the match statement. But what about the case classes? Case classes cannot inherit from each other, but traits and ordinary classes can. So, is it good practice to seal the case classes as well? What could go wrong if I don't?
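To make the "what could go wrong" part concrete (a sketch, not from the question): case classes are not final by default, so an ordinary class elsewhere can still extend one. Exhaustiveness checking on the sealed trait is not weakened by this, but the subclass inherits the case class equals/hashCode, which can be surprising; that is why a common recommendation is to mark the leaf case classes final rather than sealed:

// Compiles today: nothing forbids refining a non-final case class with a plain class.
class TracedPrint(name: String) extends Print(name)

// A match on Print(...) still catches TracedPrint instances, so exhaustiveness is intact;
// declaring `final case class Print(...)` simply rules subclasses like this out.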

by Emil Lundberg at July 01, 2015 12:46 PM

Lobsters

StackOverflow

Scala: Inheritance of static methods vs. DRY-principle vs. encapsulation

I am trying to implement an OOP-paradigm in Scala. I am going to have one abstract base class with 50-100 subclasses. Each of those subclasses should be able to generate some random instances for testing purposes. (In fact my real life scenario is quite a bit more involved than that, but I think it will suffice for this question.)

No matter how I go about it, I am dissatisfied with the solution. I hope some of you Scala experts can help me think about this problem the Scala way.

If statics were allowed in Scala, I would be doing something like:

abstract class Base {
  protected val instanceValues: List[SomeType] // i.e. row-number, full-URL etc.
  def toString():String = "Base[" + classValues.toString() + "]: " + instanceValues.toString()
  protected static def classValues: SomeOtherType // i.e. table-name, domain-name etc.
  static def genData(): List[Base] = /* some default implementation using only classValues */
}

class A(override val instanceValues: List[SomeType]) extends Base {
  protected static def classValues = new SomeType(/* A-specific */)
}

...

class Z(override val instanceValues: List[SomeType]) extends Base {
  protected static def classValues = new SomeType(/* Z-specific */)
}

class SpecialCase(override val instanceValues: List[SomeType]) extends Base {
  protected static def classValues = new SomeType(/* SpecialCase-specific */)
  override static def genData(): List[Base] = /* something specific to this subclass not easily expressed elegantly using classValues */
}

But, as statics are not allowed in Scala, this was never really a solution.

Reading things like this (note: This question is not a duplicate of that one - this deals with the inelegancy - as I see it - of using the companion objects solution) it would instead appear that I need to create 28 identical companion objects to house the classValues- and genData-methods:

abstract class Base {
  protected val instanceValues: List[SomeType]
}

class A(override val instanceValues: List[SomeType]) extends Base {
  def toString():String = "Base[" + A.classValues.toString() + "]: " + instanceValues.toString()
}
object A {
  private val classValues: SomeOtherType
  def genData(): List[Base] = /* some default implementation using only classValues */
}

...

class Z(override val instanceValues: List[SomeType]) extends Base {
  def toString():String = "Base[" + Z.classValues.toString() + "]: " + instanceValues.toString()
}
object Z {
  private val classValues: SomeOtherType
  def genData(): List[Base] = /* some default implementation using only classValues */
}

class SpecialCase(override val instanceValues: List[SomeType]) extends Base {
  def toString():String = "Base[" + SpecialCase.classValues.toString() + "]: " + instanceValues.toString()
}
object SpecialCase {
  private val classValues: SomeOtherType
  def genData(): List[Base] = /* something specific for SpecialCase */
}

Besides having quite a lot of bloat, this solution seems to violate the DRY principle, and it forced me to re-implement the shared toString method in a near-identical manner too. Finally, it means that anyone extending Base should remember to add a companion object for the new class.

A different solution, is to have a "test data"-factory:

abstract class Base {
  val instanceValues: List[SomeType]
  def classValues: SomeOtherType
  def toString():String = "Base[" + classValues.toString() + "]: " + instanceValues.toString()
}

object TestDataGenerator {
  def genData(clss:String): List[Base] = clss match {
    case "SpecialCase" => /* something specific to SpecialCase */
    case other => /* some default implementation using reflection for creation and some kind of manipulation of the SomeType and SomeOtherType objects after creation */
  }
}

class A(override val instanceValues: List[SomeType]) extends Base {
  def classValues = new SomeType(/* A-specific */)
}

...

class Z(override val instanceValues: List[SomeType]) extends Base {
  def classValues = new SomeType(/* Z-specific */)
}

class SpecialCase(override val instanceValues: List[SomeType]) extends Base {
  def classValues = new SomeType(/* SpecialCase-specific */)
}

But this requires me to open up read access to the fields instanceValues and classValues, which is not desirable.

by holbech at July 01, 2015 12:44 PM

Android/Scala project in IntelliJ 14 compiles, but crashes when launched not finding Scala class

I created a new Android project in Intellij 14, then added Scala SDK 2.11.6 to it (scope provided was the only option that worked for me). The project runs fine if I don't use any Scala class. But once I use, say, string interpolation, as soon as the code is run, the app crashes with this error:

06-20 18:36:27.277    1995-1995/com.pcn.android.games.jacks D/AndroidRuntime﹕ Shutting down VM
06-20 18:36:27.289    1995-1995/com.pcn.android.games.jacks E/AndroidRuntime﹕ FATAL EXCEPTION: main
    Process: com.pcn.android.games.jacks, PID: 1995
    java.lang.NoClassDefFoundError: Failed resolution of: Lscala/StringContext;
            at com.pcn.android.games.jacks.MyActivity.onCreate(MyActivity.scala:16)
            at android.app.Activity.performCreate(Activity.java:5990)
            at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1106)
            at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2278)
            at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2390)
            at android.app.ActivityThread.access$800(ActivityThread.java:151)
            at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1303)
            at android.os.Handler.dispatchMessage(Handler.java:102)
            at android.os.Looper.loop(Looper.java:135)
            at android.app.ActivityThread.main(ActivityThread.java:5257)
            at java.lang.reflect.Method.invoke(Native Method)
            at java.lang.reflect.Method.invoke(Method.java:372)
            at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:903)
            at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:698)
     Caused by: java.lang.ClassNotFoundException: Didn't find class "scala.StringContext" on path: DexPathList[[zip file "/data/app/com.pcn.android.games.jacks-1/base.apk"],nativeLibraryDirectories=[/vendor/lib, /system/lib]]
            at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:56)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:511)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:469)
            at com.pcn.android.games.jacks.MyActivity.onCreate(MyActivity.scala:16)
            at android.app.Activity.performCreate(Activity.java:5990)
            at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1106)
            at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2278)
            at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2390)
            at android.app.ActivityThread.access$800(ActivityThread.java:151)
            at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1303)
            at android.os.Handler.dispatchMessage(Handler.java:102)
            at android.os.Looper.loop(Looper.java:135)
            at android.app.ActivityThread.main(ActivityThread.java:5257)
            at java.lang.reflect.Method.invoke(Native Method)
            at java.lang.reflect.Method.invoke(Method.java:372)
            at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:903)
            at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:698)
    Suppressed: java.lang.ClassNotFoundException: scala.StringContext
            at java.lang.Class.classForName(Native Method)
            at java.lang.BootClassLoader.findClass(ClassLoader.java:781)
            at java.lang.BootClassLoader.loadClass(ClassLoader.java:841)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:504)
            ... 15 more
     Caused by: java.lang.NoClassDefFoundError: Class not found using the boot class loader; no stack available

I use IntelliJ 14.1.3, Java 1.7.0_80. My proguard-project.txt file is empty, but this persists whether I turn on Proguard or not.

What have I done wrong that prevents Scala classes from being seen at runtime?

by Phil at July 01, 2015 12:43 PM

Slick 3 Transaction

I'm trying to figure out how to port my own closure table implementation from another language to Scala with concurrency in mind.

I have two models, a Node (id | parentID) and a NodeTree (id | ancestor | descendant) where each entry resembles an edge in the tree.

For each new node I must do the following: Query all the ancestors (or filter the TableQuery for them) and then add a NodeTree-Entry (an edge) for each ancestor

Thanks to panther I got this far:

private val nodes = TableQuery[Nodes]

override def create(node: Node): Future[Seq[Int]] =
    {
        val createNodesAction = (
            for
            {
                parent <- nodes
                node <- (nodeTrees returning nodeTrees.map(_.id) into ((ntEntry, ntId) => ntEntry.copy(id = Some(ntId))) += NodeTree(id = None, ancestor = parent.id, descendant = node.id, deleted = None, createdAt = new Timestamp(now.getTime), updatedAt = new Timestamp(now.getTime)))
            } yield (node)
        ).transactionally

        db run createNodesAction
    }

But this yields a type mismatch:

type mismatch; found : slick.lifted.Rep[Long] required: Option[Long]

Once again: All I want to do is: For each parentNode (= each parent's parent until the last ancestor-node has no parent!) I want to create an entry in the nodeTree so that later I can easily grab all the descendants and ancestors with just another method call that filters through the NodeTree-Table.

(Just a closure table, really)

edit: These are my models

case class Node(id: Option[Long], parentID: Option[Long], level: Option[Long], deleted: Option[Boolean], createdAt: Timestamp, updatedAt: Timestamp)

class Nodes(tag: Tag) extends Table[Node](tag, "nodes")
{
    implicit val dateColumnType = MappedColumnType.base[Timestamp, Long](d => d.getTime, d => new Timestamp(d))

    def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
    def parentID = column[Long]("parent_id")
    def level = column[Long]("level")
    def deleted = column[Boolean]("deleted")
    def createdAt = column[Timestamp]("created_at")
    def updatedAt = column[Timestamp]("updated_at")

    def * = (id.?, parentID.?, level.?, deleted.?, createdAt, updatedAt) <> (Node.tupled, Node.unapply)
}

case class NodeTree(id: Option[Long], ancestor: Option[Long], descendant: Option[Long], deleted: Option[Boolean], createdAt: Timestamp, updatedAt: Timestamp)

class NodeTrees(tag: Tag) extends Table[NodeTree](tag, "nodetree")
{
    implicit val dateColumnType = MappedColumnType.base[Timestamp, Long](d => d.getTime, d => new Timestamp(d))

    def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
    def ancestor = column[Long]("ancestor")
    def descendant = column[Long]("descendant")
    def deleted = column[Boolean]("deleted")
    def createdAt = column[Timestamp]("created_at")
    def updatedAt = column[Timestamp]("updated_at")

    def * = (id.?, ancestor.?, descendant.?, deleted.?, createdAt, updatedAt) <> (NodeTree.tupled, NodeTree.unapply)
}

What I want to do is a closure table (http://technobytz.com/closure_table_store_hierarchical_data.html) that fills its edges (nodeTree) automatically when I create a node. So I don't want to manually add all these entries to the database; when I create a node on level 5, I want the whole path (= entries in the nodetree table) to be created automatically.

I hope that clears stuff up a bit :)
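
One possible shape for this, as a sketch only (it assumes Slick 3's DBIO composition, a val nodeTrees = TableQuery[NodeTrees], and a Postgres profile; createEdges and its parameters are illustrative names, not from the question): read the parent's ancestors first, build the new NodeTree rows as plain values, and insert them all in one transaction.

import java.sql.Timestamp
import slick.driver.PostgresDriver.api._
import scala.concurrent.ExecutionContext.Implicits.global

// insert one closure-table edge per ancestor of the parent, plus the parent itself
def createEdges(nodeId: Long, parentId: Long): DBIO[Option[Int]] = {
  val now = new Timestamp(System.currentTimeMillis())
  val action = for {
    // all ancestors of the parent, read as plain Longs
    ancestorIds <- nodeTrees.filter(_.descendant === parentId).map(_.ancestor).result
    rows         = (ancestorIds :+ parentId).map(anc =>
                     NodeTree(None, Some(anc), Some(nodeId), None, now, now))
    inserted    <- nodeTrees ++= rows
  } yield inserted
  action.transactionally
}

The point of this shape is that parent ids are only touched inside a query (where they are Rep[Long]), while the rows that actually get inserted are built from ordinary Option[Long] values, which avoids the mismatch in the error above.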

by Teolha at July 01, 2015 12:39 PM

TheoryOverflow

Looking for easy applications of fractional cascading

I want to give a couple of talks on fractional cascading, one of which will focus on applications. I'm looking for applications that make use of the full version of fractional cascading, not just the simple search-in-k-sorted-lists version.

The applications presented in the original companion paper require a lot of machinery in addition to fractional cascading and are therefore not suitable. My audience is general CS people who don't have specialized data-structures or computational-geometry knowledge.

Any help will be much appreciated.

by Ari at July 01, 2015 12:37 PM

Is joint Kolmogorov Complexity order invariant?

Due to the symmetry of information, it follows up to an additive constant that

K(X,Y) = K(Y,X) 

Does this hold for more than two data objects as well?

by alucard at July 01, 2015 12:34 PM

StackOverflow

Nested Java hashmap to nested Scala map conversion

What is the right way to convert a variable of type java.util.HashMap<java.lang.String, java.util.List<java.lang.String>> in Java to its Scala equivalent, Map[String, List[String]]? (with Scala Map, String and List)

I tried to use import scala.collection.JavaConverters._ and do JavaNestedMap.asScala but it failed. Is there a smart way of doing this (rather than having two maps)?
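
A sketch of what often works here (assuming the variable really is the HashMap of Lists described above): asScala only wraps the outer map, so the inner java.util.List values still need their own conversion before everything is a pure Scala structure.

import scala.collection.JavaConverters._

// JavaNestedMap: java.util.HashMap[String, java.util.List[String]] (as in the question)
val scalaMap: Map[String, List[String]] =
  JavaNestedMap.asScala.mapValues(_.asScala.toList).toMap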

by Daniel at July 01, 2015 12:28 PM

slick 3.0.0 with HikariCP driver not loaded - IllegalAccessException: AbstractHikariConfig can not access a member with modifiers "private"

I am trying to use tminglei/slick-pg v0.9.0 with slick 3.0.0 and am getting an IllegalAccessException:

akka.actor.ActorInitializationException: exception during creation
    at akka.actor.ActorInitializationException$.apply(Actor.scala:166) ~[akka-actor_2.11-2.3.11.jar:na]
    ...
Caused by: java.lang.RuntimeException: driverClassName specified class 'com.github.tminglei.MyPostgresDriver$' could not be loaded
    at com.zaxxer.hikari.AbstractHikariConfig.setDriverClassName(AbstractHikariConfig.java:370) ~[HikariCP-java6-2.3.8.jar:na]
    at slick.jdbc.HikariCPJdbcDataSource$$anonfun$forConfig$18.apply(JdbcDataSource.scala:145) ~[slick_2.11-3.0.0.jar:na]
    at slick.jdbc.HikariCPJdbcDataSource$$anonfun$forConfig$18.apply(JdbcDataSource.scala:145) ~[slick_2.11-3.0.0.jar:na]
    at scala.Option.map(Option.scala:146) ~[scala-library-2.11.7.jar:na]
    at slick.jdbc.HikariCPJdbcDataSource$.forConfig(JdbcDataSource.scala:145) ~[slick_2.11-3.0.0.jar:na]
    at slick.jdbc.HikariCPJdbcDataSource$.forConfig(JdbcDataSource.scala:135) ~[slick_2.11-3.0.0.jar:na]
    at slick.jdbc.JdbcDataSource$.forConfig(JdbcDataSource.scala:35) ~[slick_2.11-3.0.0.jar:na]
    at slick.jdbc.JdbcBackend$DatabaseFactoryDef$class.forConfig(JdbcBackend.scala:223) ~[slick_2.11-3.0.0.jar:na]
    at slick.jdbc.JdbcBackend$$anon$3.forConfig(JdbcBackend.scala:33) ~[slick_2.11-3.0.0.jar:na]
    ...
Caused by: java.lang.IllegalAccessException: Class com.zaxxer.hikari.AbstractHikariConfig can not access a member of class com.github.tminglei.MyPostgresDriver$ with modifiers "private"
    at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:109) ~[na:1.7.0_79]
    at java.lang.Class.newInstance(Class.java:373) ~[na:1.7.0_79]
    at com.zaxxer.hikari.AbstractHikariConfig.setDriverClassName(AbstractHikariConfig.java:366) ~[HikariCP-java6-2.3.8.jar:na]
    ... 43 common frames omitted

HikariCP is the default connection pool in slick 3.0.0

I have defined the driver class much like in the example:

trait MyPostgresDriver extends ExPostgresDriver with PgArraySupport
  with PgEnumSupport
  with PgRangeSupport
  with PgHStoreSupport
  with PgSearchSupport{

  override val api = new MyAPI {}

  //////
  trait MyAPI extends API
  with ArrayImplicits
  with RangeImplicits
  with HStoreImplicits
  with SearchImplicits
  with SearchAssistants

}

object MyPostgresDriver extends MyPostgresDriver

My database config is pretty straightforward [excerpt of typesafe config follows]:

slick.dbs.default {

  driver="com.github.tminglei.MyPostgresDriver$"

  db {
    driver="org.postgresql.Driver"

    url="jdbc:postgresql://hostname:port/dbname"
    user=user
    password="pass"
  }
}

It seems as if this should work, and yet...

Should I change my driver class somehow? Is it something else?

Note: as can be seen in the stacktrace I am using

  1. Java 1.7.0_79
  2. Scala 2.11.7
  3. akka 2.3.11 (I share the config instance for slick and akka)
  4. slick 3.0.0
  5. HikariCP-java6 2.3.8
  6. tminglei's slick-pg_core 0.9.0

Lastly, when debugging thru the jdk code at Class.class (decompiled line 143)

 Constructor tmpConstructor1 = this.cachedConstructor; 

I get the following (toString'ed) value (as shown by intellij):

private com.github.tminglei.MyPostgresDriver$()

Could this be indicative of the problem? If so how should I fix it?


EDIT

I have replaced the custom driver configuration with the stock PostgresDriver like so:

slick.dbs.default {

  driver="slick.driver.PostgresDriver$"

  db {
    driver="org.postgresql.Driver"

    url="jdbc:postgresql://hostname:port/dbname"
    user=user
    password="pass"
  }
}

The error is the same:

akka.actor.ActorInitializationException: exception during creation
    ...
Caused by: java.lang.RuntimeException: driverClassName specified class 'slick.driver.PostgresDriver$' could not be loaded
    ... 
Caused by: java.lang.IllegalAccessException: Class com.zaxxer.hikari.AbstractHikariConfig can not access a member of class slick.driver.PostgresDriver$ with modifiers "private"

by Yaneeve at July 01, 2015 12:27 PM

Undeadly

Out With the Old, in With the New

Ted Unangst (tedu@) has given out a blog post detailing some of the recent work going into OpenBSD:

Notes and thoughts on various OpenBSD replacements and reductions. Existing functionality and programs are frequently rewritten and replaced for the sake of simplicity or security or whatever it is that OpenBSD is all about. This process has been going on for some time, of course, but some recent activity is worth highlighting.

Read more...

July 01, 2015 12:25 PM

StackOverflow

RxScala Observables vs Play Framework Enumerators

How does using Play Framework's Enumerators, Iteratees, and Enumeratees compare to using RxScala's Observables, Subscriptions, etc. for asynchronous data flows?

In what type of scenarios would you choose to use RxScala, and when would you choose Play?

If you had big data flowing through your stream would that affect your decision?

by zunior at July 01, 2015 12:25 PM

Ragtime Migrate with Environment Variables throwing Error (Heroku Deployment)

I'm trying to run lein ragtime migrate on a heroku dyno. Normally, I would set the database path in my project.clj like so:

(defproject my-project "0.1.0-SNAPSHOT"
  :min-lein-version "2.0.0"
  :dependencies [[org.clojure/clojure "1.6.0"]
                 [org.clojure/java.jdbc "0.3.7"]
                 [postgresql "9.3-1102.jdbc41"]
                 [ragtime "0.3.9"]
                 [ring "1.4.0-RC1"]
                 [ring/ring-defaults "0.1.2"]]
  :plugins [[lein-ring "0.8.13"]
            [ragtime/ragtime.lein "0.3.9"]]
   ...
  :ragtime {:migrations ragtime.sql.files/migrations
            :database (System/getenv "DATABASE_URL")}
   ...

  :profiles
  {:dev {:dependencies [[javax.servlet/servlet-api "2.5"]
                        [ring-mock "0.1.5"]]
   :test {:ragtime {:database (System/getenv "DATABASE_URL")}}})

When I run the command, I get the following error, both locally and when deploying on Heroku:

java.lang.IllegalArgumentException: No method in multimethod 'connection' for dispatch value: postgres

Any pointers in the right direction would be very appreciated.

by TheStudent at July 01, 2015 12:20 PM

/r/emacs

Emacs navigation bar ??

Is there any navigation bar that shows all symbols, variables, classes, etc. in Emacs, like in the picture below?

http://imgur.com/UghiEOR

submitted by eniacsparc2xyz
[link] [6 comments]

July 01, 2015 12:17 PM

StackOverflow

Scala case class arity limit and jvm 254 limit

What is the new Scala case class arity limit?

In Scala 2.11 the 22-field limit was removed for case classes.

What is the new limit?

Can it go beyond the JVM limit of 254?

Thanks

by user2230605 at July 01, 2015 12:12 PM

CompsciOverflow

Does every language that fulfills the regular Pumping conditions also fulfill the context-free ones?

Let L be a language that fulfills the properties implied by the Pumping lemma for regular languages. Does L necessarily fulfill the corresponding properties of the Pumping lemma for context-free languages as well?

by gsdfgdsfgdfg at July 01, 2015 12:10 PM

StackOverflow

Pure functional programming to the GPU

I've been wanting to play around with functional code, and thought what fun it would be to do some interactive real-time ray-tracing of some randomly composed functions. Does anyone know of any compiler or converter that can take code from a functional language (or any language, with high order functions) and convert it into something that can be used in CUDA or OpenCL?

by clinux at July 01, 2015 12:06 PM

QuantOverflow

Models crumbling down due to negative (nominal) interest rates

Dear Stackexchange users,

given that a lot of sovereign bonds with maturity under 10 years are trading in negative (nominal) interest rate territory (recently the short-term EURIBOR has also dropped below zero), what are the most striking implications for models in the financial economics/quant finance field? By that I mean: which of the so-called "stylised facts" and standard models of modern finance are becoming highly controversial or just plain useless? A couple of examples which spring to mind are the following (they do not necessarily have to do with sovereign bond yields, but with the concept of negative (nominal) interest rates as such):

  • The CIR interest rates model completely breaks down due to the square root term
  • The proof that an American call option written on a non-dividend paying underlying will not be exercised before the maturity is false
  • Markowitz selection obviously encounters difficulties incorporating negative yields

What are the other consequences, on let us say, CAPM, APT, M&M or any other model in finance? Which long held beliefs are hurt the most by negative yields?

by user3612816 at July 01, 2015 11:59 AM

What are the main market efficiency measures in the stock market?

I'm going to test for the effect of a change in market efficiency on the stock market portfolio, and I want to know what the main measures known in the academic literature are, in order to compare them and to choose the "best" one for a given market.

Till now, I found the following measures to test for market efficiency in the broad market-specific case:

  • Variance Ratio (Lo & MacKinlay, 1988)
  • Approximate Entropy (Pincus, 1991)
  • The Hurst exponent (Peters, 1994)
  • Mkt Delay (Pagano & Schwartz, 2002)

For the firm-specific case, I found that the event-study procedure is the most common to test the market efficiency.

Could you suggest other measures or other procedures to test for market efficiency in the stock market, in addition to the ones I cited above, or, alternatively, suggest which is the best one by providing a reference?

by Quantopic at July 01, 2015 11:58 AM

StackOverflow

Efficiently take one value for each key out of a RDD[(key,value)]

My starting point is an RDD[(key,value)] in Scala using Apache Spark. The RDD contains roughly 15 million tuples. Each key has roughly 50+-20 values.

Now I'd like to take one value (doesn't matter which one) for each key. My current approach is the following:

  1. HashPartition the RDD by the key. (There is no significant skew)
  2. Group the tuples by key resulting in RDD[(key, array of values)]]
  3. Take the first of each value array

Basically looks like this:

...
candidates
.groupByKey()
.map(c => (c._1, c._2.head)
...

The grouping is the expensive part. It is still fast because there is no network shuffle and candidates is in memory but can I do it faster?

My idea was to work on the partitions directly, but I'm not sure what I get out of the HashPartition. If I take the first tuple of each partition, I will get every key but maybe multiple tuples for a single key depending on the number of partitions? Or will I miss keys?
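
One alternative that is often suggested for exactly this pattern (a sketch, not a benchmark): reduceByKey combines two values of the same key as soon as they meet, so only one value per key is ever kept and the per-key collections that groupByKey builds are never materialised.

// keep an arbitrary value per key without building the grouped arrays
val onePerKey = candidates.reduceByKey((v, _) => v)
// onePerKey: RDD[(key, value)] with exactly one entry per key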

Thank you!

by gausss at July 01, 2015 11:51 AM

QuantOverflow

How to price an European Call/Put Option of a jump difussion Process?

Lets have the next jump difussion Stochastic Process: $$S_t = S_0 e^{\sigma W_t + (v-\frac{\sigma ^2}{2})t}\prod_{i=1}^{N_t}(1+J_i)$$

where $W_t$ is the Brownian Motion, hence $G_t \equiv e^{\sigma W_t + (v-\frac{\sigma ^2}{2})t}$ is the Geometric Brownian Motion, $N_t$ is the Poisson Process and $R_t \equiv \prod_{i=1}^{N_t}(1+J_i)$ is the Multiplicative Poisson Compound Process.

Suppose there exists a martingale probability $\mathbb{Q}$ and that under $\mathbb{Q}$ the hypotheses of the Girsanov Theorem hold. Moreover, suppose that under $\mathbb{Q}$, $N_t$ has Poisson rate $\widehat{\lambda}$.

In this context I have to price the European Put Option of S_t, that is

$$P=e^{-r(T-t)}\mathbb{E}_\mathbb{Q}((k-S_T)_+|S_t)$$

I have thought about it this way, but I don't know if it is correct. \begin{eqnarray} \mathbb{E}_\mathbb{Q}((k-S_T)_+|S_t)& = & \mathbb{E}_\mathbb{Q}((k-S_0 e^{\sigma W_T + (v-\frac{\sigma ^2}{2})T}\prod_{i=1}^{N_T}(1+J_i))_+|S_t)\\ & = & \mathbb{E}_\mathbb{Q}((k-S_t e^{\sigma (W_T-W_t) + (v-\frac{\sigma ^2}{2})(T-t)}\prod_{i={N_{t}+1}}^{N_T}(1+J_i))_+)\\ & = & \mathbb{E}_\mathbb{Q}(\mathbb{E}_\mathbb{Q}((k-S_t e^{\sigma (W_T-W_t) + (v-\frac{\sigma ^2}{2})(T-t)}\prod_{i={N_{t}+1}}^{N_T}(1+J_i))_+|N_T-N_t=n))\\ & = & \mathbb{P}_\mathbb{Q}(N_T-N_t=n)\,\mathbb{E}_\mathbb{Q}((k-S_t e^{\sigma (W_T-W_t) + (v-\frac{\sigma ^2}{2})(T-t)}\prod_{i={1}}^{n}(1+J_i))_+) \end{eqnarray}

So finally, $$\mathbb{P}_\mathbb{Q}(N_T-N_t=n)=e^{-\widehat{\lambda}(T-t)}\frac{(\widehat{\lambda} (T-t))^n}{n!}$$

And $\mathbb{E}_\mathbb{Q}((k-S_t e^{\sigma (W_T-W_t) + (v-\frac{\sigma ^2}{2})(T-t)}\prod_{i={1}}^{n}(1+J_i))_+)$ can be calculated using the usual Black-Scholes formula.
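
Written out with the sum over the number of jumps that the conditioning produces (a sketch, assuming each conditional expectation is then evaluated for a fixed $n$ as just described), the put price becomes

$$P = e^{-r(T-t)}\sum_{n=0}^{\infty} e^{-\widehat{\lambda}(T-t)}\frac{(\widehat{\lambda}(T-t))^n}{n!}\;\mathbb{E}_\mathbb{Q}\Big[\Big(k-S_t\, e^{\sigma (W_T-W_t) + (v-\frac{\sigma ^2}{2})(T-t)}\prod_{i=1}^{n}(1+J_i)\Big)_+\Big]$$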

Is this okay, or should it be done another way? Thanks! :)

by Edin_91 at July 01, 2015 11:47 AM

StackOverflow

aggregate data for uniquely tagged values in a list in scala

I was wondering if somebody could help.

I'm trying to aggregate some data in a list based on id values. I have a ListBuffer which is updated from a foreach function. My output gives me an id number and a value; because the foreach applies a function to each id, often more than once, the list I end up with looks something like the following:

ListBuffer(3106;0, 3106;3, 3108;2, 3108;0, 3110;1, 3110;2, 3113;0, 3113;2, 3113;0)

What I want to do is apply a simple function to aggregate this data, so I am left with

List(3106;3 ,3108;2, 3110;3, 3113;2)

I thought this could be done with foldLeft or groupBy, however I'm not sure how to get it to recognise id values and normal values.
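
For example, under the assumption that each entry can be represented as an (id, value) pair, a groupBy followed by a per-key reduction would look roughly like this (a plain sum happens to reproduce the desired output above, but any combining function can go in its place):

val entries = scala.collection.mutable.ListBuffer(
  (3106, 0), (3106, 3), (3108, 2), (3108, 0), (3110, 1),
  (3110, 2), (3113, 0), (3113, 2), (3113, 0))

// group by id, then combine each id's values
val combined: Map[Int, Int] =
  entries.groupBy(_._1).mapValues(_.map(_._2).sum).toMap
// combined: Map(3106 -> 3, 3108 -> 2, 3110 -> 3, 3113 -> 2)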

Any help or pointers would be much appreciated

by ALs at July 01, 2015 11:43 AM

Scala + Spark collections interactions

I'm working on a little project of mine that uses a graph as its main structure. The graph consists of vertices that have this structure:

class SWVertex[T: ClassTag](
   val id: Long, 
   val data: T, 
   var neighbors: Vector[Long] = Vector.empty[Long], 
   val timestamp: Timestamp = new Timestamp(System.currentTimeMillis())
) extends Serializable { 
   def addNeighbor(neighbor: Long): Unit = {
      if (neighbor >= 0) { neighbors = neighbors :+ neighbor }
   }
}

Notes:

  1. There will be a lot of vertices, possibly over MAX_INT I think.
  2. Each vertex has a mutable array of neighbors (which are just the IDs of other vertices).
  3. There is a special function for adding a vertex to the graph that uses a BFS algorithm to choose the best vertex in the graph to connect the new vertex to - modifying existing vertices' neighbor arrays and adding to them.

I've decided to use Apache Spark and Scala for processing and navigating through my graph, but I'm stuck with some misunderstandings: I know that an RDD is a parallel dataset, which I'm making from the main collection using the parallelize() method, and I've discovered that modifying the source collection takes effect on the created RDD as well. I used this piece of code to find this out:

val newVertex1 = new SWVertex[String](1, "test1")
val newVertex2 = new SWVertex[String](2, "test2")
var vertexData = Seq(newVertex1, newVertex2)

val testRDD1 = sc.parallelize(vertexData, vertexData.length)

testRDD1.collect().foreach(
   f => println("| ID: " + f.id + ", data: " + f.data + ", neighbors: "
   + f.neighbors.mkString(", "))
)

// The result is:
// | ID: 1, data: test1, neighbors: 
// | ID: 2, data: test2, neighbors: 


// Calling simple procedure, that uses `addNeighbor` on both parameters
makeFriends(vertexData(0), vertexData(1))

testRDD1.collect().foreach(
   f => println("| ID: " + f.id + ", data: " + f.data + ", neighbors: "
   + f.neighbors.mkString(", "))
)

// Now the result is:
// | ID: 1, data: test1, neighbors: 2
// | ID: 2, data: test2, neighbors: 1

, but I didn't find a way to do the same thing using RDD methods (and honestly I'm not sure that this is even possible due to RDD immutability). In this case, the question is:

Is there any way to deal with such a big amount of data while keeping the ability to access random vertices in order to modify their neighbor lists, and to continuously append new vertices?

I believe the solution must lie in using some kind of Vector data structure, and in this case I have another question:

Is it possible to store Scala structures in cluster memory?

P.S. I'm planning to use Spark for processing BFS search at least, but I will be really happy to hear any of other suggestions.

P.P.S. I've read about the .view method for creating "lazy" collection transformations, but I still have no clue how it could be used...

Update 1: From my reading of the Scala Cookbook, I think that Vector will be the best choice, because working with the graph in my case means a lot of random access to the vertices (i.e. the elements of the graph) and appending of new vertices; but still, I'm not sure that using Vector for such a large number of vertices won't cause an OutOfMemoryException.

Update 2: I've found several interesting things going on with the memory in the test above. Here's the deal (keep in mind, I'm using single-node Spark cluster):

// Test were performed using these lines of code:
val runtime = Runtime.getRuntime
var usedMemory = runtime.totalMemory - runtime.freeMemory

// In the beginning of my work, before creating vertices and collection:
usedMemory = 191066456 bytes // ~182 MB, 1st run 
usedMemory = 173991072 bytes // ~166 MB, 2nd run
// After creating collection with two vertices:
usedMemory = 191066456 bytes // ~182 MB, 1st run
usedMemory = 173991072 bytes // ~166 MB, 2nd run
// After creating testRDD1
usedMemory = 191066552 bytes // ~182 MB, 1st run 
usedMemory = 173991168 bytes // ~166 MB, 2nd run
// After performing first testRDD1.collect() function
usedMemory = 212618296 bytes // ~203 MB, 1st run 
usedMemory = 200733808 bytes // ~191 MB, 2nd run
// After calling makeFriends on source collection
usedMemory = 212618296 bytes // ~203 MB, 1st run 
usedMemory = 200733808 bytes // ~191 MB, 2nd run
// After calling testRDD1.collect() for modified collection
usedMemory = 216645128 bytes // ~207 MB, 1st run 
usedMemory = 203955264 bytes // ~195 MB, 2nd run

I know that this number of tests is too low to be sure of my conclusions, but I noticed that:

  1. Nothing happens when you create the collection.
  2. After creating the RDD on this sample, 96 bytes are allocated, perhaps for storing partition data or something.
  3. The largest amount of memory was allocated when I called the .collect() method, because I basically collect all the data on one node and, probably because of the single-node Spark installation, I'm getting a double copy of the data (not sure here), which has taken about 23 MB of memory.
  4. An interesting moment happens after modifying the neighbor arrays, which requires an additional 4 MB of memory to store them.

by SuppieRK at July 01, 2015 11:39 AM

Clojure - idiomatic way to write split-first and split-last

What would be the idiomatic way of writing the following function ?

(split-first #"." "abc.def.ghi") ;;=>["abc", "def.ghi"]
(split-last #"." "abc.def.ghi") ;;=>["abc.def", "ghi"]

There is an obvious (ugly ?) solution using split, but I'm sure there are more elegant solutions ? Maybe using regexes/indexOf/split-with ?

by nha at July 01, 2015 11:36 AM

Lobsters

TheoryOverflow

Non-Midpoint Segment Splitting in Ruppert's Delaunay Triangulation Refinement Algorithm

Roughly speaking, in Ruppert's Delaunay Triangulation refinement algorithm, so called encroached edges are split until no more encroached edges remain.

The algorithm specifies splitting the edges at their midpoint (except in the case of small input angles where concentric circular shells are suggested. This question is unrelated to these cases.)

In certain domains, given a segment, there are points on the segment that I would prefer to split on that are not necessarily the midpoints (unrelated to the concentric shell trick). These points are chosen based on some domain specific underlying data (considerations beyond the graph structure the algorithm is aware of).

  • What are the implications of splitting on non-midpoints?
  • What needs to be taken into consideration when selecting among several non-midpoint candidates?
  • Does splitting on non-midpoints affect any of the convergence properties of the algorithm?

Another way to ask this is: Are there split points that are better (by some interesting measures) than the a-priori selected midpoints?

by Adi Shavit at July 01, 2015 11:25 AM

/r/scala

scaladoc : how to find a procedure like propablyPrime

Hello,

Can someone explain to me how I can find, for example, which class probablePrime is in?

Roelof

submitted by rwobben
[link] [1 comment]

July 01, 2015 11:24 AM

CompsciOverflow

Reduction NP-Complete with graph undirected [duplicate]

This question already has an answer here:

Given an undirected graph $G=(V,E)$, a subset $I$ of $V$ is independent if for each pair of vertices $u,v$ in $I$, {$u,v$} is not in $E$. Prove that the language $L$={$<G,k>$: $k$ is a positive integer and there exists an independent set $I$ of cardinality $k$ in $G$} is NP-complete. Show a reduction from the NP-complete language of satisfiable boolean formulas. You can assume that the formula is expressed in conjunctive normal form and that every clause contains exactly three literals.

PS: Very hard problem, I don't know where to start. PPS: Marked as duplicate, but I couldn't find the solution there; could you post a link to the solution?

by user47845 at July 01, 2015 11:18 AM

Fred Wilson

Using Coding To Teach Algebra

Algebra is a turning point for many students. Addition, subtraction, multiplication, division, and solving equations makes sense to most students because they come across these notions in their every day life. But functions are something completely different. It’s the first abstraction most students come across in their study of math. And I’ve seen a lot of students start to dislike math when they get to algebra. They get frustrated that they just don’t get it. They tune out and turn off to math. And that’s a shame. Because math is powerful stuff. It is the key to so much.

I’ve been really impressed with a program called Bootstrap. It is a curriculum module that math teachers can drop into an algebra (or geometry) class. It maps to the common core. In Bootstrap, students use code to make a simple videogame which they really enjoy doing. But they are also learning functions in the course of coding up their game. Once they understand how functions work in code, Bootstrap makes the leap to algebraic functions. And students get it. Because it is tangible to them instead of being this abstract concept they just don’t grok.

This is just one example, but a powerful one, of how learning to code also teaches students important other concepts that they need to learn to advance in their studies.

People ask me why teaching kids to code is so important. They ask “we don’t want everyone to become a software engineer, do we?” And of course the answer to that question is no. But coding is an important intervention device into a student’s learning. Just like writing an essay or doing a workbook is an important intervention. Coding unlocks comprehension and understanding of certain hard to understand concepts in a fun and tangible way.

And Bootstrap is a great example of that. If you are a math teacher for or a parent of students between 12 and 16, you should check it out.

by Fred Wilson at July 01, 2015 11:16 AM

StackOverflow

Spark Override Accumulator

I cannot seem to find any examples of a way to override a Spark Accumulator. I have data in a key/value format with the key being the column index. My function below filters out things that are not digits. My goal is to track how many empties per column are found.

I have the following filter:

val numFilterRDD = numRDD.filter(filterNum)

    def isAllDigits(x: String) = x matches """^\d{1,}\.*\d*$"""
    def filterNum(x: (Int, String)) : Boolean = {
      accumNum.add(1)
      if(isAllDigits(x._2)) true
      else false
    }

Right now the solution is two passes; I need to do the following before the filter:

val originalCountNum = numRDD.map(x => (x._1, 1)).reduceByKey(_ + _).collect()

And finally a comparison of the two. Is it possible with accumulators to track column index + empty count? It would remove the additional pass for the original count.
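
A sketch of one way this is sometimes done, using Spark's Accumulable API with a map-valued accumulator (the names below are illustrative, and sc stands for the SparkContext): accumulate the column index of every rejected value, so the per-column counts come out of the same single pass as the filter.

import org.apache.spark.AccumulableParam

// accumulator value: column index -> number of rejected (non-numeric) entries
object ColumnCountParam extends AccumulableParam[Map[Int, Int], Int] {
  def zero(init: Map[Int, Int]): Map[Int, Int] = Map.empty[Int, Int].withDefaultValue(0)
  def addAccumulator(acc: Map[Int, Int], col: Int): Map[Int, Int] = acc.updated(col, acc(col) + 1)
  def addInPlace(a: Map[Int, Int], b: Map[Int, Int]): Map[Int, Int] =
    b.foldLeft(a) { case (m, (k, v)) => m.updated(k, m(k) + v) }
}

val emptiesPerColumn = sc.accumulable(Map.empty[Int, Int].withDefaultValue(0))(ColumnCountParam)

val numFilterRDD = numRDD.filter { case (col, value) =>
  val keep = isAllDigits(value)            // same predicate as above
  if (!keep) emptiesPerColumn += col       // count the reject against its column index
  keep
}
// after an action has run: emptiesPerColumn.value holds the per-column counts

One caveat worth keeping in mind: accumulator updates made inside transformations (rather than actions) can be applied more than once if a task is re-executed.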

by theMadKing at July 01, 2015 11:10 AM

How to block based on Mac address on FreeBsd? (ipfw firewall)

I have worked on FreeBSD for a while. I installed and set up the ipfw firewall. I want to filter based on MAC address. How can I do it? I wrote the following, but it didn't work.

ipfw add 4 allow ip from any to any layer 2 mac-type arp
ipfw add 5 deny ip from any to any MAC any 1A:BF:48:9F:71:3B in recv $em0
ipfw add 6 deny ip from any to any MAC any 1A:BF:48:9F:71:3B any in recv $em1

Thank you for your answer

SOLUTION && CORRECT ANSWER

ipfw add 4 allow ip from any to any layer 2 mac type-arp
ipfw add 5 deny ip from any to any MAC any 10:BF:48:9F:74:6C in recv em1
ipfw add 6 deny ip from any to any MAC 10:BF:48:9F:74:6C any out xmit em0
ipfw add 7 allow ip from any to any MAC any any

by SerefAltindal at July 01, 2015 10:55 AM

DragonFly BSD Digest

NYCBUG: Precision Time Protocol

NYCBUG is having a chronologically appropriate speaker: Steven Kreuzer, talking about the Precision Time Protocol.  It’s 6:45 PM (EDT) tonight, at the Stone Creek Bar & Lounge in New York City.

by Justin Sherrill at July 01, 2015 10:50 AM

CompsciOverflow

Why is O(n log n) the best runtime there is?

I am taking a course on Coursera about algorithm design. The course said that a time of $O(n \log n)$ is considered to be good.

However, there are faster runtimes (from now on just assume big O notation), such as $1$, $n$, and $\log n$. I understand that it is almost impossible to have an algorithm in $O(1)$, as that is for trivial tasks such as addition or multiplication.

However, why is there no algorithm in $O(\log n)$ time? Mergesort is considered to be a great algorithm and in its best and worst case scenario it is $O(n \log n)$ (I could be wrong). Why is merge sort considered to be a great algorithm when $O(n \log n)$ is only faster than $n^2$, $n!$, and $2^n$?

Also, why is there no sort with a time of $\log n$? This is not a homework question by the way, I am just purely curious as to why no sorting algorithm is in $\log n$ time.
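
For comparison-based sorting there is in fact a known lower bound, which a short worked argument makes clear: a sorting algorithm must distinguish all $n!$ possible input orderings, and a binary decision tree of height $h$ has at most $2^h$ leaves, so

$$2^h \ge n! \quad\Longrightarrow\quad h \ge \log_2 (n!) = \Theta(n \log n)$$

comparisons are needed in the worst case. That is why $O(n \log n)$ is the best a comparison sort can achieve, and why an $O(\log n)$ sort is impossible (it could not even look at all $n$ elements).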

by user1477539 at July 01, 2015 10:42 AM

StackOverflow

What is the advantage of global functions when writing functional code

I am a Swift developer and am trying to adopt a functional / reactive style in my code. I have been using ReactiveCocoa in all my projects and I have started giving RAC 3.0 a try. One thing I have seen is that in project, there is heavy use of curried functions that have a global scope (i.e. not tied to an instance).

What I am keen to understand is why global functions is a good idea?

Is this something that is unique to curried functions or is it a general functional programming attribute?

by villy393 at July 01, 2015 10:41 AM

Scala syntax description

I'm looking for any resource with a description of Scala syntax. For example, right now I'm trying to understand what this function does:

reduceByKey(_ ++ _)

but I'm not able to find out what the ++ character means... I looked at http://www.scala-lang.org/files/archive/spec/2.11/ but it doesn't answer my question. Possibly someone could recommend a good resource, something like "understanding Scala", with good detailed examples.
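
For the concrete example above: _ ++ _ is just shorthand for the two-argument function (a, b) => a ++ b, and ++ is collection concatenation, so a quick sketch of what it does:

val concat: (Seq[Int], Seq[Int]) => Seq[Int] = _ ++ _
concat(Seq(1, 2), Seq(3))   // Seq(1, 2, 3)

// so rdd.reduceByKey(_ ++ _) merges, key by key, the collections stored as values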

thanks!

by Mijatovic at July 01, 2015 10:36 AM

/r/netsec

StackOverflow

clojure unit-testing with-redefs

I have something like this:

(ns server.core
  (:require [db.api :as d]))

(defrecord Server [host port instance]
  (start [c]
    (let [connection (d/connect (:host c) (:port c))]
      (assoc c :instance connection)))
  (stop [c]
    ;; close the connection
    ))

(defn new-server
  [host port]
  (map->Server {:host host
                :port port}))

And the unit-tests code

(ns server.core_test
  (:require [server.core :refer :all]
            [clojure.test :refer :all]))

(deftest server-test
  (testing "Calling start should populate :instance"
    (with-redefs [d/connect (fn [h p] [h p])]
      (let [server (start (new-server "foobar" 12313123))]
        (is (-> server :instance nil? not))))))

Running the code above with boot watch test throws an error similar to:

Unable to resolve var: d/connect in this context

And then I modify the test code so it requires the db.api

(ns server.core_test
  (:require [server.core :refer :all]
            [clojure.test :refer :all]
            [db.api :as d]))

I ran the tests again, this time d/connect still refers to db.api.

Any advice?

by user5005595 at July 01, 2015 10:25 AM

/r/netsec

Fefe

Did you also see that yesterday, on ZDF? On "heute"? ...

Did you also see this yesterday, on ZDF? On "heute"? This here (from 45 seconds in)?
In the end it comes down to the question: do the Greeks want to stay in the euro, in the eurozone? And if I look around here at what is gathering (sic!) on the square in front of the parliament today, this is the biggest demonstration in days, and these are all people who would like to stay in the euro.
Oh. Oh really? Well, let's have a look at what Russia Today reports:
Demonstrators are gathering today on Syntagma Square (Constitution Square) in front of the Greek parliament in Athens to demonstrate against the austerity policy and to motivate Greek citizens to vote against the creditors' reform plans on Sunday.
Yes, fine, but of course you can't trust RT, that's a nasty propaganda channel! So let's drop by the BBC. Huh, the No demonstration?

How does that fit together? Well, there were simply two demonstrations. In the video at the FAZ (which actually comes from Reuters) you can see a large NAI banner (ναί means "yes").

But if you want to learn that, you have to go to RT, who write:

Today, supporters of the EU reforms want to demonstrate on Syntagma Square in front of the Greek parliament in Athens, where they may encounter opponents of the EU reforms, who already expressed their solidarity with the government of Prime Minister Alexis Tsipras there yesterday.
The Greek population is simply split on this question.

July 01, 2015 10:01 AM

Yesterday: Windows 10: New preview build "with no significant ...

Yesterday: Windows 10: New preview build "with no significant bugs".

Today:

Only one day after Build 10158, which supposedly contained "no significant bugs", Microsoft has released Build 10159. According to Microsoft, it fixes over 300 bugs of the previous build.

Yeah, right.

July 01, 2015 10:01 AM

Here comes a long email about the dogfight capability ...

A long email about the dogfight capability of the F-35 just came in, which I'd like to show you:
In short:
The F-35 is a multirole combat aircraft with stealth characteristics. It is not supposed to dogfight. It was developed to complete its mission without being shot down by the enemy's air defenses.

What was tested here is a rather unimportant property of a modern combat aircraft, and two fundamentally different aircraft were compared. The F-35 is normally twice as heavy as the F-16. The maximum load factor is an indicator of how tightly an aircraft can turn.
That of the F-16 is twice as high. How such statements should be judged is described below.

See also:
https://en.wikipedia.org/wiki/Post%E2%80%93World_War_II_air-to-air_combat_losses
https://de.wikipedia.org/wiki/Lockheed_Martin_F-35#Technische_Daten
https://de.wikipedia.org/wiki/General_Dynamics_F-16#Technische_Daten

In detail:

During my officer training with the Luftwaffe and later during my studies (aerospace engineering) I heard a great deal about this topic. Much of it only verbally, so no big sources here.

This "whining" by pilots about the dogfight capabilities of combat aircraft has been around for a very long time. (Heard it myself from prospective pilots who had never even sat in the aircraft!)
On the flight surgeons' side, the story was that this is all just a psychological problem of the pilots. The pilots understandably always want to be able to defend themselves in case they encounter an enemy combat aircraft. Among other things, that is why they insist on carrying a gun in the aircraft. More on that in a moment.
In combat missions, however, an encounter with an enemy aircraft practically never happens, because the airspace is completely monitored, e.g. by AWACS, and you only run into enemy aircraft on purpose. Dogfighting is still part of the training, but it keeps being cut back. The Bundeswehr practices it, for example, in the training area off the coast of Sardinia. I was told there that it is now only staged as a kind of reward for the pilots and for testing new systems. But in the 80s and 90s people were stepping on each other's toes there, that's how busy it was.
Measured against the losses of aircraft to other aircraft in dogfights, however, losses to air defenses and to pilot error are a much bigger problem. That is why all new aircraft rely on stealth characteristics and self-protection measures. It is far more important to tailor an aircraft to these requirements than to the dogfight. I think someone simply doesn't like the aircraft. There was the same wailing when the F-14 was taken out of service and replaced by the far superior F-18. What a drama that was :).

On the topic of the gun:
Originally no gun was planned for the Eurofighter, because the statistics showed that it was only of limited use, and practically only against targets on the ground. Against aircraft the yield was modest. Almost all kills at medium range and in dogfights were achieved with missiles. That is why the IRIS-T (https://de.wikipedia.org/wiki/IRIS-T) was developed specifically for the Eurofighter. Dropping the gun saves a lot of weight and thereby makes an aircraft more capable in almost all areas, including the dogfight. All the engineers saw it that way, but not the pilots. When the design was finished, they said: "We're not getting into that. What if we run out of missiles?" The engineers rolled their eyes, since the pilots were simply not able, or not willing, to understand the facts. Politics then forced the installation of a gun. That is one of the reasons why the "Jäger 90" took somewhat longer. A gun like that is so heavy and large that the entire aircraft has to be redesigned.
Modern aircraft are getting ever more agile, and that includes the potential opponent. The jets can circle so tightly that, figuratively speaking, you have to shoot around the corner to hit them. And that can only be done with missiles.

My experience is that aviation is too complicated for most people to understand in detail, which is normal. But instead of then trusting the engineers, a lot is judged and decided by gut feeling. That applies to politicians as well as to pilots. That is also why aviation is arch-conservative in its development. "You mustn't scare people too much, they're already afraid as it is when they go up in the air." Best example: the pilotless aircraft. Good idea, absolutely sensible, but nobody would board it. Let's see how driverless cars develop. It would save many human lives, but most people won't want to use it, because they can't intervene when something goes wrong. Just think of the discussions when ABS was introduced: "Oh no, the braking distance gets longer, we're all going to die."

July 01, 2015 10:01 AM

StackOverflow

Prevent simultaneous deploys with Ansible

Anyone on my team can SSH into our special deploy server, and from there run an Ansible playbook to push new code to machines.

We're worried about what will happen if two people try to do deploys simultaneously. We'd like to make it so that the playbook will fail if anyone else is currently running it.

Any suggestions for how to do this? The standard solution is to use a pid file, but Ansible does not have built-in support for these.

by James Koppel at July 01, 2015 09:57 AM

CompsciOverflow

Graph Families that are easy to color

What are the non-trivial graph families that have a known chromatic number, or an easy way (polynomial-time algorithm) to compute the latter.

Examples would be:

  • Kneser graphs
  • Chordal graphs

Do you know any other families?

Motivation:

We are looking for interesting classes on which we can test a proper coloring heuristic. This is why we wanted some graphs that are not so easy to color but at the same time have a known chromatic number, so we can evaluate the quality of the heuristic.

by Issam T. at July 01, 2015 09:35 AM

StackOverflow

How do you print the select statements for the following Slick queries?

I would like to find out which of the following queries would be the most efficient for getting a row count on a table, so I'm trying to print out the select statements. I know you can add .selectStatement to a Queryable, but I don't know if this tells me the complete truth, because I'll have to remove the result-generating code, e.g. .list.length, and replace it with .selectStatement. Slick probably picks up that you are looking for the length and optimises further, so I want to see the select statement for the whole query, including the SQL that will be generated because of the .list.length, or because of the (... yield mt.count).first version.

Query(MyTable).list.length

(for{mt <- MyTable} yield mt).list.length

(for{mt <- MyTable} yield mt.count).first

by JacobusR at July 01, 2015 09:35 AM

Planet Theory

TR15-109 | An exponential lower bound for homogeneous depth-5 circuits over finite fields | Mrinal Kumar, Ramprasad Saptharishi

In this paper, we show exponential lower bounds for the class of homogeneous depth-$5$ circuits over all small finite fields. More formally, we show that there is an explicit family $\{P_d : d \in N\}$ of polynomials in $VNP$, where $P_d$ is of degree $d$ in $n = d^{O(1)}$ variables, such that over all finite fields $F_q$, any homogeneous depth-$5$ circuit which computes $P_d$ must have size at least $\exp(\Omega_q(\sqrt{d}))$. To the best of our knowledge, this is the first super-polynomial lower bound for this class for any field $F_q \neq F_2$. Our proof builds up on the ideas developed on the way to proving lower bounds for homogeneous depth-$4$ circuits [GKKS14, FLMS14, KLSS14, KS14b] and for non-homogeneous depth-$3$ circuits over finite fields [GK98, GR00]. Our key insight is to look at the space of shifted partial derivatives of a polynomial as a space of functions from $F_q^n \rightarrow F_q$ as opposed to looking at them as a space of formal polynomials and builds over a tighter analysis of the lower bound of Kumar and Saraf [KS14b].

July 01, 2015 09:19 AM

Fefe

YouTube does not have to pay GEMA fees for videos uploaded by ...

YouTube does not have to pay GEMA fees for videos uploaded by its users. GEMA had sued and lost before the Munich regional court (LG München).
The court confirmed that YouTube counts as a hoster and can therefore not be held liable for possible copyright infringements by its users, YouTube parent Google announced. GEMA could not be reached for comment on Tuesday evening.

July 01, 2015 09:01 AM

StackOverflow

Marshalling of `Map`s with Spray

I have been trying to marshal a bunch of Maps but I get errors. Here are the definitions:

import spray.httpx.SprayJsonSupport._
import spray.json.DefaultJsonProtocol._

import scala.collection.JavaConverters._


case class SchemaMap( schemaMap: scala.collection.immutable.Map[String, Integer] ) /**  FIRST ERROR IS HERE!! **/ 
case class Profile( counts: scala.collection.immutable.Map[String, SchemaMap] )
case class Profiles( profiles: scala.collection.immutable.Seq[Profile] )

object Profiles {
  implicit val schemaMapMarshall = jsonFormat1(SchemaMap.apply)
  implicit val profileMarshall = jsonFormat1(Profile.apply)
  implicit val profilesMarshall = jsonFormat1(Profiles.apply)

  def convertAProfileToScala(javaProfile: edu.illinois.cs.cogcomp.profilerNew.model.Profile): Profile = {
    val out = javaProfile.getAllSchema.asScala.map{item =>
      (item._1, SchemaMap(item._2.asScala.toMap))
    }.toMap
    Profile(out)
  }

  def convert[Profiles](sq: collection.mutable.Seq[Profiles]): collection.immutable.Seq[Profiles] =
    collection.immutable.Seq[Profiles](sq:_*)

  def convertProfilesToScala(javaProfiles: java.util.List[edu.illinois.cs.cogcomp.profilerNew.model.Profile]): Profiles = {
    val a: collection.mutable.Seq[Profile]  = javaProfiles.asScala.map{ convertAProfileToScala }
    val immutableSeq = collection.immutable.Seq[Profile](a:_*)
    Profiles(immutableSeq)
  }
}

Here is the error:

Error:(16, 47) could not find implicit value for evidence parameter of type spray.json.DefaultJsonProtocol.JF[scala.collection.immutable.Map[String,Integer]]
  implicit val schemaMapMarshall = jsonFormat1(SchemaMap.apply)
                                              ^
Error:(16, 47) not enough arguments for method jsonFormat1: (implicit evidence$1: spray.json.DefaultJsonProtocol.JF[scala.collection.immutable.Map[String,Integer]], implicit evidence$2: ClassManifest[org.allenai.example.webapp.SchemaMap])spray.json.RootJsonFormat[org.allenai.example.webapp.SchemaMap].
Unspecified value parameters evidence$1, evidence$2.
  implicit val schemaMapMarshall = jsonFormat1(SchemaMap.apply)
                                              ^
Error:(17, 45) could not find implicit value for evidence parameter of type spray.json.DefaultJsonProtocol.JF[scala.collection.immutable.Map[String,org.allenai.example.webapp.SchemaMap]]
  implicit val profileMarshall = jsonFormat1(Profile.apply)
                                            ^
Error:(17, 45) not enough arguments for method jsonFormat1: (implicit evidence$1: spray.json.DefaultJsonProtocol.JF[scala.collection.immutable.Map[String,org.allenai.example.webapp.SchemaMap]], implicit evidence$2: ClassManifest[org.allenai.example.webapp.Profile])spray.json.RootJsonFormat[org.allenai.example.webapp.Profile].
Unspecified value parameters evidence$1, evidence$2.
  implicit val profileMarshall = jsonFormat1(Profile.apply)
                                            ^
Error:(18, 46) could not find implicit value for evidence parameter of type spray.json.DefaultJsonProtocol.JF[scala.collection.immutable.Seq[org.allenai.example.webapp.Profile]]
  implicit val profilesMarshall = jsonFormat1(Profiles.apply)
                                             ^
Error:(18, 46) not enough arguments for method jsonFormat1: (implicit evidence$1: spray.json.DefaultJsonProtocol.JF[scala.collection.immutable.Seq[org.allenai.example.webapp.Profile]], implicit evidence$2: ClassManifest[org.allenai.example.webapp.Profiles])spray.json.RootJsonFormat[org.allenai.example.webapp.Profiles].
Unspecified value parameters evidence$1, evidence$2.
  implicit val profilesMarshall = jsonFormat1(Profiles.apply)

                                         ^

NOTE: there is a similar post here, but the answer didn't help here. As you can see I already have the imports suggested in its answer and have all the variables immutable.
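
One detail that stands out in the error messages themselves: DefaultJsonProtocol ships a format for Map[String, Int] (Scala Int) but not for java.lang.Integer. A sketch of a possible fix, under the assumption (suggested by the conversion code above) that the Java side hands back maps of String to Integer, is to convert the values while building the case classes:

case class SchemaMap(schemaMap: Map[String, Int])   // Int instead of java.lang.Integer

def convertAProfileToScala(javaProfile: edu.illinois.cs.cogcomp.profilerNew.model.Profile): Profile = {
  val out = javaProfile.getAllSchema.asScala.map { case (key, inner) =>
    key -> SchemaMap(inner.asScala.toMap.map { case (k, v) => k -> v.intValue })  // Integer -> Int
  }.toMap
  Profile(out)
}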

by Daniel at July 01, 2015 08:55 AM

CompsciOverflow

Shortest Path problem(Single Source&Destination)

Given: A completely connected directed acyclic graph.

What would be the most efficient (least time complexity) way to find a shortest path among a very large number of nodes?

Constraints:
1) The result should be optimal.
2) The cost between any two connected nodes is computed in time complexity O(layer number). E.g., if we consider the graph to be layered (since it is a DAG), then the cost between layers 'K-1' and 'K' can be computed in time complexity O(K).

Known solutions:
1) A* - but heavily dependent on the heuristic function
2) Dynamic programming - requires computing the path cost for every possible path.

So, is there any way to compute it more efficiently, or with similar time complexity?

by skn at July 01, 2015 08:53 AM

scale-free networks and adjacency matrix

Given a distribution over graphs with $n$ nodes having the "scale-free" property, I would like to compute for a pair of vertices $(a,b)$ the probability that they are connected (or more precisely the probability that there is a directed edge from $a$ to $b$).

Is it possible to have such a probability distribution?

Note that the case without the "scale-free" property is not a problem. I can define the probability of having an edge from $a$ to $b$ as $P((a,b))=1/2$ for all $a, b$, $a \neq b$ in the graph.

Thank you.

by user7060 at July 01, 2015 08:51 AM

QuantOverflow

List of momentum indicators

Is there a definite list of momentum indicators? A quick search on Google did not yield much, so I thought to ask this here.

by pyCthon at July 01, 2015 08:48 AM

TheoryOverflow

The Overfull conjecture in graph theory and $coNP$

I am not good at complexity, but got a possible relation between a plausible conjecture in graph theory and $coNP$.

Graph $G$ is Class 1 if it can be edge colored with $\Delta(G)$ colors, otherwise it is Class 2 and can be edge colored with $\Delta(G) + 1$ colors.

The Overfull conjecture (OC) asserts

A graph G with $\Delta (G) \geq n/3$ is class 2 if and only if it has an overfull subgraph $S$ such that $\displaystyle \Delta (G) = \Delta (S)$.

Assume OC and $\Delta (G) \geq n/3$.

This means that we have a short certificate whether $G$ is Class 1 or Class 2.

For Class 1 the certificate is a $\Delta(G)$ edge coloring, and finding it is in $NP$.

For Class 2 the certificate is an overfull subgraph $S$ such that $\displaystyle \Delta (G) = \Delta (S)$, and finding it is in $NP$.

This means there are no $coNP$-hard problems in this case.

There is a reduction from SAT to edge coloring $3$-regular graphs. Encoding unsatisfiable CNF to edge coloring is UNSAT and UNSAT is in $coNP$.

Question: Does the overfull conjecture and reduction from SAT to edge coloring $G$ with $\Delta (G) \geq n/3$ imply $NP=coNP$?

According to a paper, edge coloring is NP-complete (possibly a minor abuse of terminology) for $r$-regular graphs for any fixed $r \ge 3$, and it gives a reduction of $G$ to an $r$-regular $G'$.

A positive answer might help in finding a counterexample to the Overfull conjecture.

$\Delta(G)$ is the maximum degree and $n$ is the number of vertices.


EDIT Special cases of OC are proven.

According to Overfull Conjecture for Graphs with High Minimum Degree, Michael Plantholt:

we show that any (not necessarily regular) graph $G$ of even order $n$ that has sufficiently high minimum degree $\delta(G) \ge (\sqrt{7}/3) n$ has chromatic index equal to its maximum degree providing that $G$ does not contain an "overfull" subgraph, that is, a subgraph which trivially forces the chromatic index to be more than the maximum degree. This result thus verifies the Overfull Conjecture for graphs of even order and sufficiently high minimum degree.

EDIT 2 This paper p.2 might be related since it claims unless $NP=ZPP$, one can't approximate $\chi'(L^k(G))$ where $L^k(G)$ is the $k-th$ power of the line graph of $G$. (might be wrong on this since $\chi'$ is either $\Delta$ or $\Delta+1$).

by joro at July 01, 2015 08:37 AM

CompsciOverflow

Number of integers in a certain range with certain properties

How would one (quickly) find the number of integers in the range $[1, x]$ for some $x$ such that more than half of an integer's digits are the same digit, given that $x$ has at most, say, 19 digits?

Brute force would obviously not work in a reasonable amount of time (where "reasonable" means roughly $10^9$ seconds)

by user2472071 at July 01, 2015 08:37 AM

StackOverflow

FreeBSD tacacs+ client

I've been trying to run a TACACS+ client on FreeBSD 9.2 but it doesn't work. The server is on Windows using tacacs.net. I know the server is working because I can make the client work on a Cisco router, but I can't get it to work on FreeBSD.

this is my /etc/pam.d/tacacs file:

auth        sufficient    /usr/lib/pam_tacplus.so     debug    server = 10.0.0.9    secret=somesecret
account     sufficient    /usr/lib/pam_tacplus.so     debug    server = 10.0.0.9    secret=somesecret    protocol=login
session     sufficient    /usr/lib/pam_tacplus.so     debug    server = 10.0.0.9    secret=somesecret    protocol=login

and /etc/pam.d/login:

auth        include        tacacs
account        include        tacacs
session        include        tacacs

and /etc/tacplus.conf:

10.0.0.9        "somesecret"        15

The problem is that there aren't any good tutorials on how to do this; everything I did was based on some forums, and I'm not sure whether it is correct or not.

I would be thankful if you could help me. Thanks,

by ipinlnd at July 01, 2015 08:34 AM

Planet Theory

Christos on the Greek Vote

I was sitting by the lake in Lausanne, winds and waves lapping at the sands, humanity in slim clothes, wine and meat on our plates, when I leaned over and asked Christos Papadimitriou, " Christos, When are you going to write something about the Greek crisis?", and he said in a way we have come to expect from him unfailingly, "I already did". Enjoy (use Google translate)!

by metoo (noreply@blogger.com) at July 01, 2015 08:19 AM

StackOverflow

AWS S3 Java SDK: RequestClientOptions.setReadLimit

If we consider this S3 upload code

val tm: TransferManager = ???
val putRequest = new PutObjectRequest(bucketName, keyName, inputStream, metaData)
putRequest.setStorageClass(storageClass)
putRequest.getRequestClientOptions.setReadLimit(100000)
tm.upload(putRequest)

What is the use of the setReadLimit method? The AWS SDK Javadoc contains the following description:

Sets the optional mark-and-reset read limit used for signing and retry purposes. See Also: InputStream.mark(int)

Is my assumption correct in that it is to provide some kind of "checkpointing", such that if the network fails in the middle of an upload process, the API will (internally) perform a retry from the last "marked" position instead of from the beginning of the file?
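
For what it is worth, a commonly suggested setting (stated here as an assumption rather than something taken from the quoted Javadoc) is to make the read limit cover the entire stream plus one byte, so that a retry can reset the stream to its very beginning instead of only to the last buffered mark. A minimal sketch:

// Sketch only: assumes the content length is known up front via the metadata.
// setReadLimit takes an Int, so this approach only fits objects below ~2 GB.
val contentLength: Long = metaData.getContentLength
putRequest.getRequestClientOptions.setReadLimit((contentLength + 1).toInt)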

by lolski at July 01, 2015 08:18 AM

TheoryOverflow

Finding a random regular graph with degree d

I'm trying to find undirected random graphs $G(V,E)$ with $|V|$ = $d^2$ for $d \in \mathbb{N}$ such that $\forall v \in V: deg(v) = d$.

For $d \in 2\mathbb{N} +1$ this is trivially impossible, as no such graph exists: the number of incidences (connections between vertices and edges) is $|V|\cdot d = d^3 = 8k^3 + 12k^2 + 6k + 1$ for some $k$, which is odd. Since the number of incidences is always twice the number of edges, $|E| = d^3/2$ would not be an integer, a contradiction.

This argument, however, doesn't work for $d \in 2\mathbb{N}$.

My first guess was that just constructing a random graph would do; however, this can get stuck in a local maximum. For instance, for $d = 2$:

+---+    example for
|  /     an incomplete
| /      graph that
|/       cannot be
+   +    completed

A similar example can be constructed for $d = 4$, leaving up to two unconnectable vertices (essentially by using a 4-hypercube).

I strongly suspect that for each $d$ the number of valid graphs significantly outweighs the number of incomplete graphs, but I would like to know how likely it is to end up with an incomplete graph, and whether there is a better way to find these graphs than the random algorithm above (which could perhaps be fixed by breaking apart incomplete graphs, but that would not be guaranteed to terminate).
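
For illustration, a rough sketch (not part of the original question) of the standard configuration/pairing model with restarts: give every vertex $d$ stubs, draw a uniformly random perfect matching of the stubs, and reject the attempt if it produces loops or parallel edges. For fixed $d$ the acceptance probability is bounded away from zero, so the expected number of restarts stays constant.

import scala.util.Random

// Configuration (pairing) model with rejection: d stubs per vertex,
// a random perfect matching of the stubs, retried until the result is simple.
def randomRegularGraph(n: Int, d: Int, rng: Random = new Random): Set[(Int, Int)] = {
  require(n.toLong * d % 2 == 0, "n * d must be even")
  Iterator.continually {
    val stubs = rng.shuffle(Seq.tabulate(n * d)(_ / d)) // owner vertex of each stub
    val edges = stubs.grouped(2).map { case Seq(u, v) => (u min v, u max v) }.toSeq
    val simple = edges.forall { case (u, v) => u != v } && edges.distinct.size == edges.size
    if (simple) Some(edges.toSet) else None
  }.collectFirst { case Some(g) => g }.get
}

val g = randomRegularGraph(16, 4) // d = 4, |V| = d^2 = 16 as in the question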

by bitmask at July 01, 2015 08:18 AM

QuantOverflow

good R package for vectorized option pricing

For now I am using the package fOptions, but it doesn't allow for vectorized computation of Black76 prices and deltas. Which package can be used to do that?

As noted by @Richard, I could use lapply, but that is still looping in R, which is slow (at least too slow for me). I am looking for a package that has a compiled loop, i.e. one that provides a natively vectorized function.

by RockScience at July 01, 2015 08:18 AM

StackOverflow

Scala rewriting type parameter of sub type in F-bounded polymorphism

I am trying to create a trait Entity which forces its subtypes to have 2 states: Transient and Persistent

trait EntityState
trait Transient extends EntityState
trait Persistent extends EntityState
trait Entity[State <: EntityState]

For example, a subclass, say class Post[State <: EntityState] extends Entity[State], can be instantiated either as new Post[Persistent] or as new Post[Transient].

Next, I am adding some methods to the trait Entity that can be called depending on its State:

trait Entity[State <: EntityState] {
    def id(implicit ev: State <:< Persistent): Long
    def persist(implicit ev: State <:< Transient): Entity[Persistent]
}

To explain, for any class that extends Entity, the method id can be called only when the class is of state Persistent (i.e. it has been saved to the database and already assigned an autogenerated id).

On the other hand, the method persist can be called only when the class is Transient (not yet saved to the database). The method persist is meant to save an instance of the caller class to the database and return the Persistent version of the class.

Now, the problem is I would like that the return type of persist be that of the caller class. For example, if I call persist on an instance of class Post[Transient], it should return Post[Persistent] instead of Entity[Persistent].

I searched around and found something called F-Bounded Polymorphism. I have tried many ways to adapt it to my problem, but it still does not work. Here is what I did:

First try:

trait Entity[State <: EntityState, Self[_] <: Entity[State,Self]] {
    def id(implicit ev: State <:< Persistent): Long
    def persist(implicit ev: State <:< Transient): Self[Persistent]
}

and

class Post[State <: EntityState] extends Entity[State, ({type λ[B] = Post[State]})#λ] {

    def persist(implicit ev: <:<[State, Transient]): Post[State] = {
        ???
    }
}

In the class Post above, I used Eclipse's auto-completion to generate the implementation of the method persist and found that its return type is still incorrect.

Second try:

class Post[State <: EntityState] extends Entity[State, Post] {

   def persist(implicit ev: <:<[State, Transient]): Post[Persistent] = {
       ???
   }
}

With this, it seems correct, except that it has a compilation error:

[error] D:\playspace\myblog\app\models\post\Post.scala:14: kinds of the type arguments (State,models.post.Post) do not conform to the expected kinds of the type parameters (type State,type Self) in trait Entity.
[error] models.post.Post's type parameters do not match type Self's expected parameters:
[error] type State's bounds <: common.models.EntityState are stricter than type _'s declared bounds >: Nothing <: Any
[error] trait Post[State <: EntityState] extends Entity[State, Post] {
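
For comparison, here is a minimal sketch (an assumption about one possible shape, not verified against the poster's full code base) in which the higher-kinded parameter declares the same bound that Post's type parameter has, which is what the kind error above complains about:

trait EntityState
trait Transient extends EntityState
trait Persistent extends EntityState

// Self now takes a bounded type argument (S <: EntityState),
// so Post's own bounded parameter matches the expected kind.
trait Entity[State <: EntityState, Self[S <: EntityState] <: Entity[S, Self]] {
  def id(implicit ev: State <:< Persistent): Long
  def persist(implicit ev: State <:< Transient): Self[Persistent]
}

class Post[State <: EntityState] extends Entity[State, Post] {
  def id(implicit ev: State <:< Persistent): Long = ???
  def persist(implicit ev: State <:< Transient): Post[Persistent] = ???
}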

by asinkxcoswt at July 01, 2015 08:11 AM

Fefe

Biometrics in retail is one of the many, many ...

Biometrics in retail is one of the many, many success stories of industry self-regulation.
On June 16, consumer privacy advocates walked out of talks to set voluntary rules for companies that use facial recognition technology. They explained that they were withdrawing from the talks because industry would not agree to critical privacy protections.

July 01, 2015 08:01 AM

The F-35 in a dogfight is clearly inferior even to an F-16 ...

The F-35 in a dogfight is clearly inferior even to an F-16. That's what happens when you try to make everybody happy: you end up not being particularly good at any one discipline.

July 01, 2015 08:01 AM

StackOverflow

How to convert unix timestamp to date in Spark

I have a data frame with a column of Unix timestamps (e.g. 1435655706000), and I want to convert it to dates with the format 'yyyy-MM-DD'. I've tried nscala-time but it doesn't work.

    val time_col = sqlc.sql("select ts from mr").map(_(0).toString.toDateTime)
    time_col.collect().foreach(println)

and I got the error: java.lang.IllegalArgumentException: Invalid format: "1435655706000" is malformed at "6000"
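
For comparison, a minimal sketch (using plain java.util.Date rather than nscala-time, as an assumption about one workable route) that treats the value as epoch milliseconds instead of parsing it as an ISO string:

import java.text.SimpleDateFormat
import java.util.Date

// The column holds epoch milliseconds, so build a Date from the Long directly;
// the formatter is created inside the closure to avoid serialization issues.
val time_col = sqlc.sql("select ts from mr")
  .map(row => new SimpleDateFormat("yyyy-MM-dd").format(new Date(row(0).toString.toLong)))
time_col.collect().foreach(println)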

by user2681194 at July 01, 2015 07:57 AM

How can I use clj-http in riemann.config

I use Riemann and I am now writing my riemann.config.

I want to use clj-http to POST all events from the Riemann stream to my web server, but I don't know how to import clj-http from riemann.jar.

I put (:use clj-http.client) or (:require [clj-http.client :as client]) in riemann.config but got the error:

java.lang.ClassNotFoundException: clj-http.client

Could anyone help me?

by keroro520 at July 01, 2015 07:54 AM

Deterministic topological order in scala graph

I'm using the scala-graph library to build a directed graph and to retrieve its nodes in topological order. Since a graph can have many valid topological orders, I need a deterministic result for graphs that are equal and built in the same way.

This small app highlights the problem

import scalax.collection.Graph
import scalax.collection.GraphEdge.DiEdge
import scalax.collection.GraphPredef._

object MainApp extends App {

  // Creates new graph for every call
  // val is not an option
  def graph: Graph[String, DiEdge] = Graph(
    "A" ~> "B",
    "A" ~> "C",
    "A" ~> "D",
    "B" ~> "E",
    "B" ~> "F",
    "C" ~> "G",
    "C" ~> "H",
    "D" ~> "F",
    "D" ~> "G"
  )

  val results = 1 to 20 map { _ =>
    graph.topologicalSort.mkString("")
  }

  println(results.tail.forall(_ == results.head))
}

This app prints false.

Is there a way to get a deterministic topological sort of a graph using the API of the scala-graph library? Writing such an algorithm from scratch would be my last option.

by Vanya Stanislavciuc at July 01, 2015 07:43 AM

Planet Clojure

Seven specialty Emacs settings with big payoffs

Let's skip the bread and butter Emacs modes and focus on settings that are hard to find, but which once activated provide a big payoff.

For reference, my complete Emacs settings are defined in init.el and lisp/packages.el


1. Monochrome rainbows are the best way to reveal unbalanced delimiters

I rely on paredit and formatting to keep my parentheses honest, and for the most part that works out great. Occasionally I need to go outside the box. Emacs defaults are terrible for finding unbalanced forms when things go wrong. This setting makes it obvious that there is an error when I have fallen out with my grouping delimiters. See the red highlighting in this example:





The trick is to not use different colored delimiters! The reason I need the rainbow delimiters package is only to highlight unbalanced delimiters, which it does quickly and accurately. For those cases where I really want to differentiate a group, placing the cursor on the delimiter causes Emacs to highlight the other delimiter.

To activate this setting:


Install rainbow-delimiters, then
M-x customize-group rainbow-delimiters
Rainbow Delimiters Max Face Count: 1

(add-hook 'prog-mode-hook 'rainbow-delimiters-mode)
(require 'rainbow-delimiters)
(set-face-attribute 'rainbow-delimiters-unmatched-face nil
                    :foreground 'unspecified
                    :inherit 'error)

Depending on your theme you may wish to modify the delimiter colors to something relatively passive like grey.


2. Save on focus out

I love writing ClojureScript because it feels alive. When I change some code I instantly see the results reflected in the browser thanks to figwheel or boot-reload. It is even better than a REPL because you don't need to eval. This setting causes my file to be saved when I switch from Emacs to my browser. This is perfect because I don't need to mentally keep track of saving or compiling... I just code, switch to my browser and see the code reloaded. If something goes wrong figwheel notifies me with a popup or boot-reload with a noise, and I can go back to the code or check my build output. This also works great with ring reload middleware.

To activate this setting:


(defun save-all ()
  (interactive)
  (save-some-buffers t))

(add-hook 'focus-out-hook 'save-all)

You will never forget to save/eval again, and the saves happen at the right time in your workflow.


3. Chords

Emacs key-strokes are pretty gnarly. Especially for a VIM guy like me (Emacs evil-mode is the best VIM). Chords are so much more comfortable! For example I press j and x together instead of M-x. Just be careful when choosing chords that they are combinations you will never type normally. I mainly use chords for switching buffers, navigating windows, opening files, and transposing expressions.

To activate this setting:


Install key-chord

(key-chord-define-global "jx" 'smex)

(You can see more chords in my settings file)


4. Forward and backward transpose

I move code around a lot! This setting allows me to swap values quickly and move forms around. For example if I accidentally put function call arguments in the wrong order, I can swap them around. Or if I define my functions out of order, I can swap the entire form up or down. I just put my cursor at the start of the form I want to move, and press 'tk' to move it forward/down, or 'tj' to move it backward/up. Joel came up with this trick, and has lots of other Emacs foo up his sleeve (check out his matilde repo).

To activate this setting:


(defun noprompt/forward-transpose-sexps ()
  (interactive)
  (paredit-forward)
  (transpose-sexps 1)
  (paredit-backward))

(defun noprompt/backward-transpose-sexps ()
  (interactive)
  (transpose-sexps 1)
  (paredit-backward)
  (paredit-backward))

(key-chord-define-global "tk" 'noprompt/forward-transpose-sexps)
(key-chord-define-global "tj" 'noprompt/backward-transpose-sexps)


5. Choose a comfortable font

Perhaps this is obvious, but it took me a while to realize what a big impact a comfortable font has when you spend a lot of time in Emacs. The default font is rather small in my opinion, and there are better options. I find Source Code Pro a good choice.

To activate this setting:


(set-frame-font "Source Code Pro-16" nil t)


6. Send expressions to the REPL buffer

For Clojure coding, cider is great. But weirdly it doesn't have a way to send code to the REPL buffer... instead you just evaluate things and see the output. I really like seeing the code in the REPL buffer, and for pair programming it is essential for showing your pair exactly what you are doing (they can't see your zany Emacs keystrokes!) This setting binds C-; to send the current form to the REPL buffer and evaluate it.

To activate this setting:


(defun cider-eval-expression-at-point-in-repl ()
  (interactive)
  (let ((form (cider-defun-at-point)))
    ;; Strip excess whitespace
    (while (string-match "\\`\s+\\|\n+\\'" form)
      (setq form (replace-match "" t t form)))
    (set-buffer (cider-get-repl-buffer))
    (goto-char (point-max))
    (insert form)
    (cider-repl-return)))

(require 'cider-mode)
(define-key cider-mode-map
  (kbd "C-;") 'cider-eval-expression-at-point-in-repl)


7. Prevent annoying "Active processes exist" query when you quit Emacs.

When I exit Emacs, it is often in anger. I never want to keep a process alive. I explicitly want my processes and Emacs to stop. So this prompt is infuriating. Especially when there are multiple processes. Emacs, just stop already!

To activate this setting:


(require 'cl)
(defadvice save-buffers-kill-emacs (around no-query-kill-emacs activate)
  (flet ((process-list ())) ad-do-it))


Final comments

These settings improved my workflow dramatically. Drop me an email or comment if you have any trouble with them. I put this list together after being inspired by Jake's blog. He has lots of great Emacs/workflow tips that you might like if Emacs is your editor of choice.

by timothypratley (noreply@blogger.com) at July 01, 2015 07:18 AM

Planet Theory

A note on the large spectrum and generalized Riesz products

I wrote a short note entitled Covering the large spectrum and generalized Riesz products that simplifies and generalizes the approach of the first few posts on Chang’s Lemma and Bloom’s variant.

The approximation statement is made in the context of general probability measures on a finite set (though it should extend at least to the compact case with no issues). The algebraic structure only comes into play when the spectral covering statements are deduced (easily) from the general approximation theorem. The proofs are also done in the general setting of finite abelian groups.

Comments are encouraged, especially about references I may have missed.


by James at July 01, 2015 07:13 AM

StackOverflow

Is there anyway to install scala without sbt?

I have a Fedora system and I need to install Scala. I have searched online and it says to install sbt, but I don't want to install sbt. Is there any way to install just Scala? For example, is there a command like sudo yum install scala that will solve my problem?

by eddard.stark at July 01, 2015 07:06 AM

java.lang.ClassNotFoundException: Class scala.runtime.Nothing when running the scoobi WordCount example

I'm trying to run the word count example from the Quick start page

import com.nicta.scoobi.Scoobi._
import Reduction._

object WordCount extends ScoobiApp {
  def run() {
    val lines = fromTextFile(args(0))

    val counts = lines.mapFlatten(_.split(" "))
      .map(word => (word, 1))
      .groupByKey
      .combine(Sum.int)
    counts.toTextFile(args(1)).persist
  }
}

It works fine when I use in-memory mode, but when trying local mode (or cluster mode) it fails with the following errors:

[WARN] LocalJobRunner - job_local_0001 <java.lang.RuntimeException: java.lang.ClassNotFoundException: Class scala.runtime.Nothing$ not found>java.lang.RuntimeException: java.lang.ClassNotFoundException: Class scala.runtime.Nothing$ not found
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1439)
        at com.nicta.scoobi.impl.mapreducer.ChannelOutputFormat.com$nicta$scoobi$impl$mapreducer$ChannelOutputFormat$$mkTaskContext$1(ChannelOutputFormat.scala:63)
        at com.nicta.scoobi.impl.mapreducer.ChannelOutputFormat$$anonfun$getContext$1.apply(ChannelOutputFormat.scala:75)
        at com.nicta.scoobi.impl.mapreducer.ChannelOutputFormat$$anonfun$getContext$1.apply(ChannelOutputFormat.scala:75)
        at scala.collection.mutable.MapLike$class.getOrElseUpdate(MapLike.scala:189)
        at scala.collection.mutable.AbstractMap.getOrElseUpdate(Map.scala:91)
        at com.nicta.scoobi.impl.mapreducer.ChannelOutputFormat.getContext(ChannelOutputFormat.scala:75)
        at com.nicta.scoobi.impl.mapreducer.ChannelOutputFormat.write(ChannelOutputFormat.scala:43)
        at com.nicta.scoobi.impl.plan.mscr.MscrOutputChannel$$anon$5$$anonfun$write$1.apply(OutputChannel.scala:137)
        at com.nicta.scoobi.impl.plan.mscr.MscrOutputChannel$$anon$5$$anonfun$write$1.apply(OutputChannel.scala:135)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at com.nicta.scoobi.impl.plan.mscr.MscrOutputChannel$$anon$5.write(OutputChannel.scala:135)
        at com.nicta.scoobi.impl.plan.mscr.GbkOutputChannel$$anonfun$reduce$1.apply$mcV$sp(OutputChannel.scala:201)
        at com.nicta.scoobi.impl.plan.mscr.GbkOutputChannel$$anonfun$reduce$1.apply(OutputChannel.scala:201)
        at com.nicta.scoobi.impl.plan.mscr.GbkOutputChannel$$anonfun$reduce$1.apply(OutputChannel.scala:201)
        at scala.Option.getOrElse(Option.scala:120)
        at com.nicta.scoobi.impl.plan.mscr.GbkOutputChannel.reduce(OutputChannel.scala:200)
        at com.nicta.scoobi.impl.mapreducer.MscrReducer$$anonfun$reduce$1.apply(MscrReducer.scala:55)
        at com.nicta.scoobi.impl.mapreducer.MscrReducer$$anonfun$reduce$1.apply(MscrReducer.scala:52)
        at scala.Option.foreach(Option.scala:236)
        at com.nicta.scoobi.impl.mapreducer.MscrReducer.reduce(MscrReducer.scala:52)
        at com.nicta.scoobi.impl.mapreducer.MscrReducer.reduce(MscrReducer.scala:33)
        at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:164)
        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:572)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:414)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:309)
Caused by: java.lang.ClassNotFoundException: Class scala.runtime.Nothing$ not found
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1350)
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1437)
        ... 25 more

[INFO] TrackerDistributedCacheManager - Deleted path /tmp/scoobi-root/WordCount$-1124-035523--1298047809/step1 of 1/archive/3757970833182018747_-1642337927_156373685/file/tmp/scoobi-root/WordCount$-1124-035523--1298047809/dist-objs/scoobi.combiners-step1 of 1
[INFO] TrackerDistributedCacheManager - Deleted path /tmp/scoobi-root/WordCount$-1124-035523--1298047809/step1 of 1/archive/1307074498433974065_910223079_156373685/file/tmp/scoobi-root/WordCount$-1124-035523--1298047809/dist-objs/scoobi.mappers-step1 of 1
[INFO] TrackerDistributedCacheManager - Deleted path /tmp/scoobi-root/WordCount$-1124-035523--1298047809/step1 of 1/archive/-624792843022440048_-470268278_156372685/file/tmp/scoobi-root/WordCount$-1124-035523--1298047809/dist-objs/scoobi.metadata.TG23
[INFO] TrackerDistributedCacheManager - Deleted path /tmp/scoobi-root/WordCount$-1124-035523--1298047809/step1 of 1/archive/-7527273518266336656_-470264434_156372685/file/tmp/scoobi-root/WordCount$-1124-035523--1298047809/dist-objs/scoobi.metadata.TK23
[INFO] TrackerDistributedCacheManager - Deleted path /tmp/scoobi-root/WordCount$-1124-035523--1298047809/step1 of 1/archive/-7162952586058180219_-470259629_156372685/file/tmp/scoobi-root/WordCount$-1124-035523--1298047809/dist-objs/scoobi.metadata.TP23
[INFO] TrackerDistributedCacheManager - Deleted path /tmp/scoobi-root/WordCount$-1124-035523--1298047809/step1 of 1/archive/-1228551315878554095_-470253863_156372685/file/tmp/scoobi-root/WordCount$-1124-035523--1298047809/dist-objs/scoobi.metadata.TV23
[INFO] TrackerDistributedCacheManager - Deleted path /tmp/scoobi-root/WordCount$-1124-035523--1298047809/step1 of 1/archive/6598684265640022340_1943382592_156373685/file/tmp/scoobi-root/WordCount$-1124-035523--1298047809/dist-objs/scoobi.reducers-step1 of 1
[INFO] TrackerDistributedCacheManager - Deleted path /tmp/scoobi-root/WordCount$-1124-035523--1298047809/step1 of 1/archive/1699308645513763631_1905624154_156371685/file/tmp/scoobi-root/WordCount$-1124-035523--1298047809/env/a88809af-334b-499e-bafc-1a2ebeffdfbd
[INFO] MapReduceJob - Map 100%    Reduce   0%
[error] (run-main) com.nicta.scoobi.impl.exec.JobExecException: MapReduce job 'job_local_0001' failed! Please see http://localhost:8080/ for more info.
com.nicta.scoobi.impl.exec.JobExecException: MapReduce job 'job_local_0001' failed! Please see http://localhost:8080/ for more info.
        at com.nicta.scoobi.impl.exec.MapReduceJob.report(MapReduceJob.scala:80)
        at com.nicta.scoobi.impl.exec.HadoopMode$Execution$$anonfun$reportMscr$1.apply(HadoopMode.scala:157)
        at com.nicta.scoobi.impl.exec.HadoopMode$Execution$$anonfun$reportMscr$1.apply(HadoopMode.scala:154)
        at scala.Function2$$anonfun$tupled$1.apply(Function2.scala:54)
        at scala.Function2$$anonfun$tupled$1.apply(Function2.scala:53)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.AbstractTraversable.map(Traversable.scala:105)
        at com.nicta.scoobi.impl.exec.HadoopMode$Execution.runMscrs(HadoopMode.scala:133)
        at com.nicta.scoobi.impl.exec.HadoopMode$Execution.execute(HadoopMode.scala:115)
        at com.nicta.scoobi.impl.exec.HadoopMode$$anonfun$executeLayer$1.apply(HadoopMode.scala:105)
        at com.nicta.scoobi.impl.exec.HadoopMode$$anonfun$executeLayer$1.apply(HadoopMode.scala:104)
        at org.kiama.attribution.AttributionCore$CachedAttribute.apply(AttributionCore.scala:61)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
        at scala.collection.AbstractTraversable.map(Traversable.scala:105)
        at com.nicta.scoobi.impl.exec.HadoopMode.com$nicta$scoobi$impl$exec$HadoopMode$$executeLayers$1(HadoopMode.scala:68)
        at com.nicta.scoobi.impl.exec.HadoopMode$$anonfun$executeNode$1.apply(HadoopMode.scala:91)
        at com.nicta.scoobi.impl.exec.HadoopMode$$anonfun$executeNode$1.apply(HadoopMode.scala:84)
        at org.kiama.attribution.AttributionCore$CachedAttribute.apply(AttributionCore.scala:61)
        at scalaz.syntax.IdOps$class.$bar$greater(IdOps.scala:15)
        at scalaz.syntax.ToIdOps$$anon$1.$bar$greater(IdOps.scala:78)
        at com.nicta.scoobi.impl.exec.HadoopMode.execute(HadoopMode.scala:52)
        at com.nicta.scoobi.impl.exec.HadoopMode.execute(HadoopMode.scala:48)
        at com.nicta.scoobi.impl.Persister.persist(Persister.scala:44)
        at com.nicta.scoobi.impl.ScoobiConfigurationImpl.persist(ScoobiConfigurationImpl.scala:355)
        at com.nicta.scoobi.application.Persist$class.persist(Persist.scala:33)
        at p.WordCount$.persist(scoobi-test.scala:6)
        at com.nicta.scoobi.application.Persist$PersistableList.persist(Persist.scala:151)
        at p.WordCount$.run(scoobi-test.scala:14)
        at com.nicta.scoobi.application.ScoobiApp$$anonfun$main$1.apply$mcV$sp(ScoobiApp.scala:80)
        at com.nicta.scoobi.application.ScoobiApp$$anonfun$main$1.apply(ScoobiApp.scala:75)
        at com.nicta.scoobi.application.ScoobiApp$$anonfun$main$1.apply(ScoobiApp.scala:75)
        at com.nicta.scoobi.application.LocalHadoop$class.runOnLocal(LocalHadoop.scala:41)
        at p.WordCount$.runOnLocal(scoobi-test.scala:6)
        at com.nicta.scoobi.application.LocalHadoop$class.executeOnLocal(LocalHadoop.scala:35)
        at p.WordCount$.executeOnLocal(scoobi-test.scala:6)
        at com.nicta.scoobi.application.LocalHadoop$$anonfun$onLocal$1.apply(LocalHadoop.scala:29)
        at com.nicta.scoobi.application.InMemoryHadoop$class.withTimer(InMemory.scala:71)
        at p.WordCount$.withTimer(scoobi-test.scala:6)
        at com.nicta.scoobi.application.InMemoryHadoop$class.showTime(InMemory.scala:79)
        at p.WordCount$.showTime(scoobi-test.scala:6)
        at com.nicta.scoobi.application.LocalHadoop$class.onLocal(LocalHadoop.scala:29)
        at p.WordCount$.onLocal(scoobi-test.scala:6)
        at com.nicta.scoobi.application.Hadoop$class.onHadoop(Hadoop.scala:60)
        at p.WordCount$.onHadoop(scoobi-test.scala:6)
        at com.nicta.scoobi.application.ScoobiApp$class.main(ScoobiApp.scala:75)
        at p.WordCount$.main(scoobi-test.scala:6)
        at p.WordCount.main(scoobi-test.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:616)
[trace] Stack trace suppressed: run last compile:runMain for the full output.
[INFO] Task - Communication exception: java.lang.InterruptedException
        at java.lang.Object.wait(Native Method)
        at org.apache.hadoop.mapred.Task$TaskReporter.run(Task.java:648)
        at java.lang.Thread.run(Thread.java:679)

java.lang.RuntimeException: Nonzero exit code: 1
        at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:runMain for the full output.
[error] (compile:runMain) Nonzero exit code: 1
[error] Total time: 9 s, completed Nov 24, 2013 3:55:30 AM

Running the very similar example from GitHub (https://github.com/NICTA/scoobi/tree/SCOOBI-0.7.3/examples/wordCount) does work.

Any ideas?

EDIT: I ran the sample according to the explanations in the Scoobi quick start; the sample is run using these sbt commands:

sbt compile
sbt "run-main mypackage.myapp.WordCount input-files output"

There is no reference regarding how or where to supply parameters such as the location of external jars.

by Ophir Yoktan at July 01, 2015 07:03 AM

StackOverflow

Using ansible to manage disk space

Simple ask: I want to delete some files if partition utilization goes over a certain percentage.

I have access to "size_total" and "size_available" via "ansible_mounts". i.e.:

ansible myhost -m setup -a 'filter=ansible_mounts'
myhost | success >> {
"ansible_facts": {
    "ansible_mounts": [
        {
            "device": "/dev/mapper/RootVolGroup00-lv_root", 
            "fstype": "ext4", 
            "mount": "/", 
            "options": "rw", 
            "size_available": 5033046016, 
            "size_total": 8455118848
        }, 

How do I access those values, and how would I perform actions conditionally based on them using Ansible?

by thermans at July 01, 2015 06:49 AM

Functional Programming - Lots of emphasis on recursion, why?

I am getting introduced to functional programming [FP] (using Scala). One thing that stands out from my initial learning is that FP relies heavily on recursion, and it also seems that in pure FP the only way to do iterative things is by writing recursive functions.

Because of the heavy use of recursion, the next thing FP has to worry about is stack overflows from long-winded recursive calls. This is tackled with optimizations: tail calls can reuse the current stack frame, and Scala provides the @tailrec annotation (from v2.8 onwards) to verify that such an optimization applies.
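
To make that concrete, here is a small sketch (not from the original question) of the usual accumulator-passing shape that @tailrec accepts; because the recursive call is the last thing the function does, it compiles down to a loop and uses constant stack space:

import scala.annotation.tailrec

def sum(xs: List[Int]): Int = {
  @tailrec
  def loop(rest: List[Int], acc: Int): Int = rest match {
    case Nil    => acc              // done: return the accumulated result
    case h :: t => loop(t, acc + h) // tail call: reuses the current stack frame
  }
  loop(xs, 0)
}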

Can someone please enlighten me as to why recursion is so important to the functional programming paradigm? Is there something in the specification of functional programming languages which gets "violated" if we do things iteratively? If so, I am keen to know that as well.

PS: Note that I am a newbie to functional programming, so feel free to point me to existing resources if they explain/answer my question. I do understand that Scala in particular also supports doing things iteratively.

by peakit at July 01, 2015 06:15 AM

QuantOverflow

Training set of tick-by-tick data?

I'm looking to find a free source of tick by tick data (<1sec) for training purposes. It doesn't need to be longer than a day, and I don't care what instrument, or exchange, or time it is. I just want real numbers with a high frequency.

Is such a data set freely available?

by user107 at July 01, 2015 06:02 AM

QuantOverflow

Extended CIR and discretization

Does anyone know how to discretize this process efficiently:

$dX(t) = \kappa [\theta(t)-X(t)]dt + \sigma \sqrt{X(t)}dW(t)$

I am looking for something more sophisticated than the trivial Euler scheme:

$X(t_{k+1}) =X(t_{k}) + \kappa[\theta(t_k)-X(t_{k})]\Delta t + \sigma \sqrt{X(t_{k})}\Delta_k W$
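
For reference (not part of the original question), one refinement that is often tried before moving to specialized square-root-diffusion schemes is the Milstein correction with the square root truncated at zero, writing $X^+ = \max(X,0)$:

$X(t_{k+1}) = X(t_{k}) + \kappa[\theta(t_k)-X(t_{k})]\Delta t + \sigma \sqrt{X(t_{k})^+}\,\Delta_k W + \frac{\sigma^2}{4}\left[(\Delta_k W)^2 - \Delta t\right]$

Andersen's quadratic-exponential scheme for square-root diffusions is the usual next step if this is still too crude.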

Thanks in advance,

by Gauss8 at July 01, 2015 05:59 AM

CompsciOverflow

Do functional algorithms require more memory than imperative algorithms? [on hold]

Let's suppose we are counting words in a string. We split it, so what we have is an array of strings. I'll use Python as an example.

The imperative approach would be as follows:

wordcount = {}
for word in words:
    wordcount[word] = wordcount.get(word, 0) + 1

The functional would be:

uniquewords = set(words)
wordcount = [words.count(w) for w in uniquewords]

For each word w we are doing a full scan on the words array, while the imperative approach goes over each word just once. Am I right to suppose that the functional way of doing it will consume a lot more resources than the imperative one?

by Rodrigo Stevaux at July 01, 2015 05:58 AM

StackOverflow

Parsing date time information from CSV in Zeppelin and Spark

I'm trying to read a CSV file and build up a data frame.

The format of the CSV is like below. I used the ISO 8601 date/time format for the date/time string representation.

2015-6-29T12:0:0,b82debd63cffb1490f8c9c647ca97845,G1J8RX22EGKP,2015-6-29T12:0:5,2015-6-29T12:0:6,0QA97RAM1GIV,2015-6-29T12:0:10,2015-6-29T12:0:11,2015-6-29T12:0:12,2015-6-29T12:5:42,1
2015-6-29T12:20:0,0d60c871bd9180275f1e4104d4b7ded0,5HNB7QZSUI2C,2015-6-29T12:20:5,2015-6-29T12:20:6,KSL2LB0R6367,2015-6-29T12:20:10,2015-6-29T12:20:11,2015-6-29T12:20:12,2015-6-29T12:25:13,1
......

To load this data, I wrote the following Scala code in Zeppelin:

import org.apache.spark.sql.types.DateType
import org.apache.spark.sql.functions._
import org.joda.time.DateTime
import org.joda.time.format.DateTimeFormat
import sys.process._

val logCSV = sc.textFile ("log_table.csv")

case class record(
    callingTime:DateTime, 
    userID:String, 
    CID:String, 
    serverConnectionTime:DateTime, 
    serverResponseTime:DateTime, 
    connectedAgentID:String, 
    beginCallingTime:DateTime, 
    endCallingTime:DateTime, 
    Succeed:Int)


val formatter = DateTimeFormat.forPattern("yyyy-mm-dd'T'kk:mm:ss")

val logTable = logCSV.map(s => s.split(",") ).map(
    s => record(
            formatter.parseDateTime( s(0) ), 
            s(1),
            s(2),
            formatter.parseDateTime( s(3) ), 
            formatter.parseDateTime( s(4) ), 
            s(5),
            formatter.parseDateTime( s(6) ), 
            formatter.parseDateTime( s(7) ),            
            s(8).toInt
        )
).toDF()

It produced the error below. The main issue is that DateTime is not serializable.

logCSV: org.apache.spark.rdd.RDD[String] = log_table.csv MapPartitionsRDD[38] at textFile at <console>:169
defined class record
formatter: org.joda.time.format.DateTimeFormatter = org.joda.time.format.DateTimeFormatter@46051d99
org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:166)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:158)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:1623)
    at org.apache.spark.rdd.RDD.map(RDD.scala:286)

So I wonder how to handle date/time information in Scala here. Could you help me?
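
To frame the question, here is a rough sketch (simplified to two columns, an assumption rather than a tested fix) of one way to keep the closure serializable: build the formatter inside each partition and store java.sql.Timestamp, which is serializable and maps to a Spark SQL type, instead of joda's DateTime:

import java.sql.Timestamp
import org.joda.time.format.DateTimeFormat

case class Record(callingTime: Timestamp, userID: String) // simplified to 2 fields

val logTable = logCSV.map(_.split(",")).mapPartitions { rows =>
  val fmt = DateTimeFormat.forPattern("yyyy-MM-dd'T'kk:mm:ss") // built per partition, never shipped from the driver
  rows.map(s => Record(new Timestamp(fmt.parseDateTime(s(0)).getMillis), s(1)))
}.toDF()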

by Jinho Yoo at July 01, 2015 05:46 AM

History of virtual memory management?

I'm trying to understand what's going on when I call malloc in my C program. I notice that it will use a system call named brk to change the ending address of the heap (on Linux). But on FreeBSD, it seems to use another system call named mmap to get the memory, and madvise to free the allocated memory. And the man page of brk on FreeBSD says "The brk() and sbrk() functions are legacy interfaces from before the advent of modern virtual memory management." Can anyone provide the history behind this, or recommend resources?

by wdv4758h at July 01, 2015 05:39 AM

Get item before filter match

To filter a list of items I use:

  val l : List[String] = List("1","a","2","b")

  l.filter(c => c.contains("b")).foreach(println)

But how can I access the items that occur immediately before the matched items? So in this case "2" is accessed?

Update:

 List("1","a","2","b","3","b","4","b")
filtering on "b" returns 
List("2","3","4")
filtering on "a" returns
List("1")

by blue-sky at July 01, 2015 05:38 AM

CompsciOverflow

life.exe 1k file with evolving black and white patterns [on hold]

Edited: Thank you for the feedback. I will try to rephrase the question to make it more relevant.

Back in the 90's I found a 1k program called "life.exe", which ran for about 2 minutes and featured several black/white patterns that "swirled", "morphed" and transformed as if something was "growing", almost like simulating cellular activity. I later believed it to be varying algorithms depicting the evolution of cellular automata. The interesting thing was the incredibly high resolution of the patterns.

I've since lost the file, but I've been looking for code similar to it, and am curious about what kind of code would generate such a thing. I've been reading Mike's OS page, and am fascinated by how tiny code can generate powerful results.

by Abbas Jaffary at July 01, 2015 05:08 AM

StackOverflow

Scala Rank-1 Polymorphism

The following code snippet is from twitter's Scala school:

Scala has rank-1 polymorphism. Roughly, this means that there are some type concepts you’d like to express in Scala that are “too generic” for the compiler to understand. Suppose you had some function:

def toList[A](a: A) = List(a)

which you wished to use generically:

def foo[A, B](f: A => List[A], b: B) = f(b)

This does not compile, because all type variables have to be fixed at the invocation site. Even if you “nail down” type B,

def foo[A](f: A => List[A], i: Int) = f(i) // Line 1

…you get a type mismatch.

Why would Line 1 fail? The type of B is known. Why should that fail compilation?

by user3102968 at July 01, 2015 04:59 AM

Scala Regex Extractor with OR operator

I have this verbose code that does short-circuit regex extraction/matching in Scala. It attempts to match a string against the first regex; if that doesn't match, it attempts to match the string against the second regex.

val regex1 : scala.util.matching.Regex = "^a(b)".r
val regex2 : scala.util.matching.Regex = "^c(d)".r

val s = ? 
val extractedGroup1 : Option[String] = s match { case regex1(v) => Some(v) case _ => None }
val extractedGroup2 : Option[String] = s match { case regex2(v) => Some(v) case _ => None}

val extractedValue = extractedGroup1.orElse(extractedGroup2)

Here are the results:

s == "ab" then extractedValue == "b"

s == "cd" then extractedValue == "c"

s == "gg" then extractedValue == None.

My question is: how can we combine the two regexes into a single regex with the regex OR operator, and still use Scala extractors? I tried this, but it always falls through to the None case.

val regex : scala.util.matching.Regex = "^a(b)$ | ^c(d)$".r
val extractedValue = s match { case regex(v) => Some(v) case _ => None }
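
One possible sketch (not from the question) of how the combined pattern might still work as an extractor: drop the literal spaces around the alternation, bind both groups, and wrap them in Option so the null group of the non-matching branch is discarded:

val regex = "^a(b)$|^c(d)$".r

def extract(s: String): Option[String] = s match {
  case regex(b, d) => Option(b).orElse(Option(d)) // exactly one of b, d is non-null
  case _           => None
}

extract("ab") // Some(b)
extract("cd") // Some(d)
extract("gg") // None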

by Tuan Vo at July 01, 2015 04:43 AM

StackOverflow

Scala Use :: to removeDuplicates

I am reading the book Programming in Scala by Martin O., and one example there for removing duplicates totally confused me:

def removeDuplicates[A](xs: List[A]): List[A] = {
   if (xs.isEmpty) xs
   else
       xs.head :: removeDuplicates(
           xs.tail filter (x => x != xs.head)
       )
}

println(removeDuplicates[String](List("a", "a", "b", "a", "c")))

gives me:

List(a,b,c)

I know that .head gives you the very first element of the List, while .tail gives you the rest of the List. And I can understand that xs.tail filter (x => x != xs.head) returns a list containing the elements which are not equal to the head.

My Google search leads me to the cons operator; however, I am still having a hard time mapping Martin's words to this example. Can anyone help me understand how :: works in this function?
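
As a reading aid (an expansion of the example, not text from the book), here is how the call unfolds on the sample input; :: simply prepends the kept head to the already-deduplicated tail:

removeDuplicates(List("a", "a", "b", "a", "c"))
// == "a" :: removeDuplicates(List("b", "c"))         head "a", tail with all "a"s filtered out
// == "a" :: ("b" :: removeDuplicates(List("c")))     head "b", tail with all "b"s filtered out
// == "a" :: ("b" :: ("c" :: removeDuplicates(Nil)))  head "c", empty tail
// == "a" :: "b" :: "c" :: Nil                        i.e. List("a", "b", "c")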

by B.Mr.W. at July 01, 2015 03:43 AM

StackOverflow

Is it a good idea to make Ansible and Rundeck work together, or using either one is enough?

Recently I've been looking at Ansible and want to use it in projects. There's also another tool, Rundeck, which can be used to do all kinds of operations work. I have experience with neither tool, and this is my current understanding of them:

Similar points

  • Both tools are agent-less and use SSH to execute commands on remote servers

  • Rundeck's main concept is the Node, similar to Ansible's inventory; the key idea is to define/manage/group the target servers

  • Rundeck can execute ad-hoc commands on selected nodes; Ansible can also do this very conveniently.
  • Rundeck can define a workflow and execute it on selected nodes; this can be done with Ansible by writing a playbook
  • Rundeck can be integrated with a CI tool like Jenkins to do deployment work; we can also define a Jenkins job that runs ansible-playbook to do the deployment

Different points

  • Rundeck has the concept of Job, which Ansible does not

  • Rundeck has a job scheduler, which Ansible can only achieve with other tools like Jenkins or cron jobs

  • Rundeck has a web UI by default for free, but you have to pay for Ansible Tower

It seems both Ansible and Rundeck can be used to do configuration/management/deployment work, maybe in different ways. So my questions are:

  • Are these two complementary tools, or are they designed for different purposes? If they're complementary tools, why is Ansible only compared to tools like Chef/Puppet/Salt but not to Rundeck? If they're not, why do they have so much overlapping functionality?
  • We're already using Jenkins for CI to build a continuous delivery pipeline; which tool (Ansible/Rundeck) is the better choice for the deployment?
  • If they can be used together, what's the best practice?

Any suggestions and experience sharing are greatly appreciated. Thanks in advance.

Best Regards.

by seki-shi at July 01, 2015 03:34 AM

Blob or BYTEA from Plain SQL Query in Slick 3.0.0

I am trying to return a BLOB from a Postgres 9.4 database using Slick 3.0.0

My simple attempt is

import slick.driver.PostgresDriver.api._
import slick.jdbc.JdbcBackend.Database
import scala.concurrent.Await
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object QueryRunner extends App {
  val db = Database.forURL("jdbc:postgresql://localhost:5432/test", "test_migration", "test_migration", driver = "org.postgresql.Driver")

  def selectRegions = sql"Select region_data from test.regions".as[java.sql.Blob]
  val result = db.run(selectRegions)
  val regionData = Await.result(result, 1.seconds)
}

That returns me

Error:(16, 65) could not find implicit value for parameter rconv: slick.jdbc.GetResult[java.sql.Blob] def selectRegions = sql"Select region_data from core.regions".as[java.sql.Blob]

I feel that, since Blob and BYTEA are somewhat specialized, I must be missing an import?
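
As a point of comparison, a small sketch (an assumption about one workable route, not a verified fix) that sidesteps the missing implicit by reading the bytea column as Array[Byte] with a hand-written GetResult:

import slick.jdbc.GetResult

// Supplies the implicit conversion the plain SQL interpolation is asking for.
implicit val getByteArray: GetResult[Array[Byte]] = GetResult(_.nextBytes())

def selectRegions = sql"select region_data from test.regions".as[Array[Byte]]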

by bearrito at July 01, 2015 03:28 AM

Are statistical programming languages like R/SAS considered functional or procedural

I still don't understand the difference after reading this

So, rather than asking what the difference between functional and procedural programming is, I thought maybe a language I am familiar with could serve as an example.

Hence, my question: are the languages R/SAS considered procedural or functional?

by Victor at July 01, 2015 03:17 AM

Planet Theory

Quantum query complexity: the other shoe drops

Two weeks ago I blogged about a breakthrough in query complexity: namely, the refutation by Ambainis et al. of a whole slew of conjectures that had stood for decades (and that I mostly believed, and that had helped draw me into theoretical computer science as a teenager) about the largest possible gaps between various complexity measures for total Boolean functions. Specifically, Ambainis et al. built on a recent example of Göös, Pitassi, and Watson to construct bizarre Boolean functions f with, among other things, near-quadratic gaps between D(f) and R0(f) (where D is deterministic query complexity and R0 is zero-error randomized query complexity), near-1.5th-power gaps between R0(f) and R(f) (where R is bounded-error randomized query complexity), and near-4th-power gaps between D(f) and Q(f) (where Q is bounded-error quantum query complexity). See my previous post for more about the definitions of these concepts and the significance of the results (and note also that Mukhopadhyay and Sanyal independently obtained weaker results).

Because my mental world was in such upheaval, in that earlier post I took pains to point out one thing that Ambainis et al. hadn’t done: namely, they still hadn’t shown any super-quadratic separation between R(f) and Q(f), for any total Boolean function f. (Recall that a total Boolean function, f:{0,1}n→{0,1}, is one that’s defined for all 2n possible input strings x∈{0,1}n. Meanwhile, a partial Boolean function is one where there’s some promise on x: for example, that x encodes a periodic sequence. When you phrase them in the query complexity model, Shor’s algorithm and other quantum algorithms achieving exponential speedups work only for partial functions, not for total ones. Indeed, a famous result of Beals et al. from 1998 says that D(f)=O(Q(f)6) for all total functions f.)

So, clinging to a slender reed of sanity, I said it “remains at least a plausible conjecture” that, if you insist on a fair comparison—i.e., bounded-error quantum versus bounded-error randomized—then the biggest speedup quantum algorithms can ever give you over classical ones, for total Boolean functions, is the square-root speedup that Grover’s algorithm easily achieves for the n-bit OR function.

Today, I can proudly report that my PhD student, Shalev Ben-David, has refuted that conjecture as well.  Building on the Göös et al. and Ambainis et al. work, but adding a new twist to it, Shalev has constructed a total Boolean function f such that R(f) grows roughly like Q(f)2.5 (yes, that’s Q(f) to the 2.5th power). Furthermore, if a conjecture that Ambainis and I made in our recent “Forrelation” paper is correct—namely, that a problem called “k-fold Forrelation” has randomized query complexity roughly Ω(n1-1/k)—then one would get nearly a cubic gap between R(f) and Q(f).

The reason I found this question so interesting is that it seemed obvious to me that, to produce a super-quadratic separation between R and Q, one would need a fundamentally new kind of quantum algorithm: one that was unlike Simon’s and Shor’s algorithms in that it worked for total functions, but also unlike Grover’s algorithm in that it didn’t hit some impassable barrier at the square root of the classical running time.

Flummoxing my expectations once again, Shalev produced the super-quadratic separation, but not by designing any new quantum algorithm. Instead, he cleverly engineered a Boolean function for which you can use a combination of Grover’s algorithm and the Forrelation algorithm (or any other quantum algorithm that gives a huge speedup for some partial Boolean function—Forrelation is just the maximal example), to get an overall speedup that’s a little more than quadratic, while still keeping your Boolean function total. I’ll let you read Shalev’s short paper for the details, but briefly, it once again uses the Göös et al. / Ambainis et al. trick of defining a Boolean function that equals 1 if and only if the input string contains some hidden substructure, and the hidden substructure also contains a pointer to a “certificate” that lets you quickly verify that the hidden substructure was indeed there. You can use a super-fast algorithm—let’s say, a quantum algorithm designed for partial functions—to find the hidden substructure assuming it’s there. If you don’t find it, you can simply output 0. But if you do find it (or think you found it), then you can use the certificate, together with Grover’s algorithm, to confirm that you weren’t somehow misled, and that the substructure really was there. This checking step ensures that the function remains total.

Are there further separations to be found this way? Almost certainly! Indeed, Shalev, Robin Kothari, and I have already found some more things (as well as different/simpler proofs of known separations), though nothing quite as exciting as the above.

Update (July 1): Ronald de Wolf points out in the comments that this “trust-but-verify” trick, for designing total Boolean functions with unexpectedly low quantum query complexities, was also used in a recent paper by himself and Ambainis (while Ashley Montanaro points out that a similar trick was used even earlier, in a different context, by Le Gall).  What’s surprising, you might say, is that it took as long as it did for people to realize how many applications this trick has.

Update (July 2): In conversation with Robin Kothari and Cedric Lin, I realized that Shalev’s superquadratic separation between R and Q, combined with a recent result of Lin and Lin, resolves another open problem that had bothered me since 2001 or so. Given a Boolean function f, define the “projective quantum query complexity,” or P(f), to be the minimum number of queries made by a bounded-error quantum algorithm, in which the answer register gets immediately measured after each query. This is a model of quantum algorithms that’s powerful enough to capture (for example) Simon’s and Shor’s algorithms, but not Grover’s algorithm. Indeed, one might wonder whether there’s any total Boolean function for which P(f) is asymptotically smaller than R(f)—that’s the question I wondered about around 2001, and that I discussed with Elham Kashefi. Now, by using an argument based on the “Vaidman bomb,” Lin and Lin recently proved the fascinating result that P(f)=O(Q(f)2) for all functions f, partial or total. But, combining with Shalev’s result that there exists a total f for which R(f)=Ω(Q(f)2.5), we get that there’s a total f for which R(f)=Ω(P(f)1.25). In the other direction, the best I know is that P(f)=Ω(bs(f)) and therefore R(f)=O(P(f)3).

by Scott at July 01, 2015 03:06 AM

QuantOverflow

Comparing a money-weighted return of my own portfolio with a benchmark ETF/other portfolio that is subject to the same cashflows

I am able to calculate the money-weighted return (the XIRR equivalent in Excel) of my portfolio. Whilst I can compare this with 'headline' returns of ETFs, mutual funds, etc., I want to isolate the timing of my cashflows from my choice of investments. In other words, I want to see if I would have been better off making my investment timing decisions and investing in the benchmark rather than investing in my own choice.

The difficulty seems to be that if I apply the same cashflows to the benchmark, the benchmark itself can become very unrepresentative. For example, if I have outperformed the benchmark, then make a cash withdrawal from my portfolio and try to replicate this in the benchmark (with the same dollar amount), the benchmark return itself may go negative. Is there a common approach to this?

by Topdown at July 01, 2015 02:39 AM

StackOverflow

Parallellize Independent Function Calls that Each Modify Function's Parent Environment

I'd like to find a way to parallelize repeated independent function calls in which each call modifies the function's parent environment. Each execution of the function is independent; however, for various reasons I am unable to consider an implementation that doesn't rely on modifying the function's parent environment. See the simplified example below. Is there a way to pass a copy of the parent environment to each node? I am running this on a Linux system.

 create_fun <- function(){

        helper <- function(x, params) {x+params}
        helper2 <- function(z) {z+helper(z)}

        master <- function(y, a){
            parent <- parent.env(environment())
            formals(parent[['helper']])$params <- a
            helper2(y)}

       return(master)
}

# function to be called repeatedly
master <- create_fun()

# data to be iterated over
x <- expand.grid(1:100, 1:5)

# vector where output should be stored
results <- vector("numeric", nrow(x))

# task I'd like to parallelize
for(i in 1:nrow(x)){
    results[i] <- master(x[i,1], x[i, 2])
}

by k13 at July 01, 2015 02:20 AM

Slick MTable.getTables always fails with Unexpected exception[JdbcSQLException: Invalid value 7 for parameter columnIndex [90008-60]]

I have written this very simple code

object PersonDAO {
  val db = Database.forConfig("h2mem1")
  val people = TableQuery[People]

  def checkTable() : Boolean = {
    val action = MTable.getTables
    val future = db.run(action)
    val retVal = future map {result =>
      result map {x => x}
    }

    val x = Await.result(retVal, Duration.Inf)

    if (x.length > 0) {
      true
    } else {
      false
    }
  }
}

However, this always fails with the error message:

play.api.UnexpectedException: Unexpected exception[JdbcSQLException: Invalid value 7 for parameter columnIndex [90008-60]]
    at play.core.ReloadableApplication$$anonfun$get$1$$anonfun$apply$1$$anonfun$1.apply(ApplicationProvider.scala:166) ~[play_2.10-2.3.4.jar:2.3.4]
    at play.core.ReloadableApplication$$anonfun$get$1$$anonfun$apply$1$$anonfun$1.apply(ApplicationProvider.scala:130) ~[play_2.10-2.3.4.jar:2.3.4]
    at scala.Option.map(Option.scala:145) ~[scala-library-2.10.5.jar:na]
    at play.core.ReloadableApplication$$anonfun$get$1$$anonfun$apply$1.apply(ApplicationProvider.scala:130) ~[play_2.10-2.3.4.jar:2.3.4]
    at play.core.ReloadableApplication$$anonfun$get$1$$anonfun$apply$1.apply(ApplicationProvider.scala:128) ~[play_2.10-2.3.4.jar:2.3.4]
Caused by: org.h2.jdbc.JdbcSQLException: Invalid value 7 for parameter columnIndex [90008-60]
    at org.h2.message.Message.getSQLException(Message.java:84) ~[h2-1.0.60.jar:1.0.60]
    at org.h2.message.Message.getSQLException(Message.java:88) ~[h2-1.0.60.jar:1.0.60]
    at org.h2.message.Message.getInvalidValueException(Message.java:117) ~[h2-1.0.60.jar:1.0.60]
    at org.h2.jdbc.JdbcResultSet.checkColumnIndex(JdbcResultSet.java:2857) ~[h2-1.0.60.jar:1.0.60]
    at org.h2.jdbc.JdbcResultSet.get(JdbcResultSet.java:2880) ~[h2-1.0.60.jar:1.0.60]
[success] Compiled in 22ms

by Knows Not Much at July 01, 2015 02:11 AM

/r/netsec

/r/netsec's Q3 2015 Information Security Hiring Thread

Overview

If you have open positions at your company for information security professionals and would like to hire from the /r/netsec user base, please leave a comment detailing any open job listings at your company.

We would also like to encourage you to post internship positions as well. Many of our readers are currently in school or are just finishing their education.

Please reserve top level comments for those posting open positions.

Rules & Guidelines
  • Include the company name in the post. If you want to be topsykret, go recruit elsewhere.
  • Include the geographic location of the position along with the availability of relocation assistance.
  • If you are a third party recruiter, you must disclose this in your posting.
  • Please be thorough and upfront with the position details.
  • Use of non-hr'd (realistic) requirements is encouraged.
  • While it's fine to link to the position on your company's website, provide the important details in the comment.
  • Mention if applicants should apply officially through HR, or directly through you.
  • Please clearly list citizenship, visa, and security clearance requirements.

You can see an example of acceptable posts by perusing past hiring threads.

Feedback

Feedback and suggestions are welcome, but please don't hijack this thread (use moderator mail instead.)

submitted by sanitybit
[link] [20 comments]

July 01, 2015 02:03 AM

StackOverflow

How to use Scala Stack for a postfix arithmetic calculation?

I wrote an algorithm that takes an infix expression and changes it to postfix, and now I want to evaluate that expression. I have seen this as a sample, but I cannot understand what happens in some parts of it. It uses a Scala Stack.

  1. How does this part work (see the sketch after this list):

     case x :: y :: xs => xs ++ List(op(y, x))
    
  2. The types I am using in my calculations are integer numbers and RDDs, so in this example, is it right to replace "Float" with "Any"?
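
The sketch below (not part of the question, with Float operands as in the sample being discussed) spells out what the pattern in point 1 does when a List is used as the stack: x :: y :: xs binds the first two elements of the list to x and y and the remaining elements to xs, and the result of op(y, x) is then appended to that remainder.

// Pattern binds the first two list elements to x and y, the rest to xs,
// and appends op(y, x) to the remainder.
def applyOp(stack: List[Float], op: (Float, Float) => Float): List[Float] =
  stack match {
    case x :: y :: xs => xs ++ List(op(y, x))
    case _            => sys.error("need at least two operands on the stack")
  }

applyOp(List(3.0f, 4.0f, 10.0f), _ - _) // List(10.0, 1.0): computes op(4.0, 3.0) = 1.0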

by Rubbic at July 01, 2015 01:36 AM

Updating only non-None case class values in slick 3.0 update

I have a set of simple case classes, each of which has a number of properties that are Optional:

case class Person (name: Option[String], age: Option[Int], etc)

When all case class properties are provided (not None), the Slick update code works fine; I just use the case class instance in the update query.

The question is, there are many cases in which any combination of properties might be None. I don't want to write a specific update query method for each combination.

How do I use a case class in an slick update query so slick only updates the non-None property values in the table and leaves the others intact (not trying to set them to null)?

by IUnknown at July 01, 2015 01:33 AM

arXiv Computer Science and Game Theory

Pure Strategies in Imperfect Information Stochastic Games. (arXiv:1506.09140v1 [cs.FL])

We consider imperfect information stochastic games where we require the players to use pure (i.e. non randomised) strategies. We consider reachability, safety, Büchi and co-Büchi objectives, and investigate the existence of almost-sure/positively winning strategies for the first player when the second player is perfectly informed or more informed than the first player. We obtain decidability results for positive reachability and almost-sure Büchi with optimal algorithms to decide existence of a pure winning strategy and to compute one if exists. We complete the picture by showing that positive safety is undecidable when restricting to pure strategies even if the second player is perfectly informed.

by Arnaud Carayol, Christof Löding, Olivier Serre at July 01, 2015 01:30 AM

City Data Fusion: Sensor Data Fusion in the Internet of Things. (arXiv:1506.09118v1 [cs.CY])

Internet of Things (IoT) has gained substantial attention recently and play a significant role in smart city application deployments. A number of such smart city applications depend on sensor fusion capabilities in the cloud from diverse data sources. We introduce the concept of IoT and present in detail ten different parameters that govern our sensor data fusion evaluation framework. We then evaluate the current state-of-the art in sensor data fusion against our sensor data fusion framework. Our main goal is to examine and survey different sensor data fusion research efforts based on our evaluation framework. The major open research issues related to sensor data fusion are also presented.

by Meisong Wang, Charith Perera, Prem Prakash Jayaraman, Miranda Zhang, Peter Strazdins, Rajiv Ranjan at July 01, 2015 01:30 AM

Smart Small Cell for 5G: Theoretical Feasibility and Prototype Results. (arXiv:1506.09109v1 [cs.NI])

In this article, we present a real-time three dimensional (3D) hybrid beamforming for fifth generation (5G) wireless networks. One of the key concepts in 5G cellular systems is a small cell network, which settles the high mobile traffic demand and provides uniform user-experienced data rates. The overall capacity of the small cell network can be enhanced with the enabling technology of 3D hybrid beamforming. This study validates the feasibility of the 3D hybrid beamforming, mostly for link-level performances, through the implementation of a real-time testbed using a software-defined radio (SDR) platform and fabricated antenna array. Based on the measured data, we also investigate system-level performances to verify the gain of the proposed smart small cell system over long term evolution (LTE) systems by performing system-level simulations based on a 3D ray-tracing tool.

by Jinyoung Jang, MinKeun Chung, Hae Gwang Hwang, Yeon-Geun Lim, Hong-jib Yoon, TaeckKeun Oh, Byung-Wook Min, Yongshik Lee, Kwang Soon Kim, Chan-Byoung Chae, Dong Ku Kim at July 01, 2015 01:30 AM

Privacy-preserving Publication of Mobility Data with High Utility. (arXiv:1506.09074v1 [cs.CR])

An increasing amount of mobility data is being collected every day by different means, e.g., by mobile phone operators. This data is sometimes published after the application of simple anonymization techniques, which might lead to severe privacy threats. We propose in this paper a new solution whose novelty is twofold. Firstly, we introduce an algorithm designed to hide places where a user stops during her journey (namely points of interest), by enforcing a constant speed along her trajectory. Secondly, we leverage places where users meet to take a chance to swap their trajectories and therefore confuse an attacker.

by Vincent Primault, Sonia Ben Mokhtar, Lionel Brunie at July 01, 2015 01:30 AM

The Potential of the Intel Xeon Phi for Supervised Deep Learning. (arXiv:1506.09067v1 [cs.DC])

Supervised learning of Convolutional Neural Networks (CNNs), also known as supervised Deep Learning, is a computationally demanding process. To find the most suitable parameters of a network for a given application, numerous training sessions are required. Therefore, reducing the training time per session is essential to fully utilize CNNs in practice. While numerous research groups have addressed the training of CNNs using GPUs, so far not much attention has been paid to the Intel Xeon Phi coprocessor. In this paper we investigate empirically and theoretically the potential of the Intel Xeon Phi for supervised learning of CNNs. We design and implement a parallelization scheme named CHAOS that exploits both the thread- and SIMD-parallelism of the coprocessor. Our approach is evaluated on the Intel Xeon Phi 7120P using the MNIST dataset of handwritten digits for various thread counts and CNN architectures. Results show a 103.5x speed up when training our large network for 15 epochs using 244 threads, compared to one thread on the coprocessor. Moreover, we develop a performance model and use it to assess our implementation and answer what-if questions.

by Andre Viebke, Sabri Pllana at July 01, 2015 01:30 AM

Architecture-Aware Configuration and Scheduling of Matrix Multiplication on Asymmetric Multicore Processors. (arXiv:1506.08988v1 [cs.PF])

Asymmetric multicore processors (AMPs) have recently emerged as an appealing technology for severely energy-constrained environments, especially in mobile appliances where heterogeneity in applications is mainstream. In addition, given the growing interest for low-power high performance computing, this type of architectures is also being investigated as a means to improve the throughput-per-Watt of complex scientific applications.

In this paper, we design and embed several architecture-aware optimizations into a multi-threaded general matrix multiplication (gemm), a key operation of the BLAS, in order to obtain a high performance implementation for ARM big.LITTLE AMPs. Our solution is based on the reference implementation of gemm in the BLIS library, and integrates a cache-aware configuration as well as asymmetric static and dynamic scheduling strategies that carefully tune and distribute the operation's micro-kernels among the big and LITTLE cores of the target processor. The experimental results on a Samsung Exynos 5422, a system-on-chip with ARM Cortex-A15 and Cortex-A7 clusters that implements the big.LITTLE model, expose that our cache-aware versions of gemm with asymmetric scheduling attain important gains in performance with respect to their architecture-oblivious counterparts while exploiting all the resources of the AMP to deliver considerable energy efficiency.

by Sandra Catalán, Francisco D. Igual, Rafael Mayo, Rafael Rodríguez-Sánchez, Enrique S. Quintana-Ortí at July 01, 2015 01:30 AM

Quantum Markov chains: description of hybrid systems, decidability of equivalence, and model checking linear-time properties. (arXiv:1506.08982v1 [quant-ph])

In this paper, we study a model of quantum Markov chains that is a quantum analogue of Markov chains and is obtained by replacing probabilities in transition matrices with quantum operations. We show that this model is very suited to describe hybrid systems that consist of a quantum component and a classical one, although it has the same expressive power as another quantum Markov model proposed in the literature.

Indeed, hybrid systems are often encountered in quantum information processing; for example, both quantum programs and quantum protocols can be regarded as hybrid systems. Thus, we further propose a model called hybrid quantum automata (HQA) that can be used to describe these hybrid systems that receive inputs (actions) from the outer world. We show the language equivalence problem of HQA is decidable in polynomial time. Furthermore, we apply this result to the trace equivalence problem of quantum Markov chains, and thus it is also decidable in polynomial time. Finally, we discuss model checking linear-time properties of quantum Markov chains, and show the quantitative analysis of regular safety properties can be addressed successfully.

by Lvzhou Li, Yuan Feng at July 01, 2015 01:30 AM

Big Data Technology Literature Review. (arXiv:1506.08978v1 [cs.DC])

A short overview of various algorithms and technologies that are helpful for big data storage and manipulation. Includes pointers to papers for further reading, and, where applicable, pointers to open source projects implementing a described storage type.

by Michael Bar-Sinai at July 01, 2015 01:30 AM