Planet Primates

March 28, 2015

StackOverflow

ansible vsphere_guest error

I am trying to auto-create some VMs, and I can say the code was working: I was able to create 20 VMs on the fly. Then I upgraded my Ansible from

ansible 1.8.4
configured module search path = None

to

ansible 1.9.0.1
configured module search path = None

I also upgraded pysphere from 0.1.7 to 0.1.8 (pysphere-0.1.8-py2.7.egg-info); the only reason for that upgrade is the relocate() function in 0.1.8.

Now when I run my Ansible code I get this error:

failed: [host1] => {"failed": true} msg: unsupported parameter for module: username

tasks:
  - debug: var=vm
  - name: Gather VM facts
    vsphere_guest:
      vcenter_hostname: vcenter name
      password: pass
      username: user
      guest: ansible
      vmware_guest_facts: yes

REMOTE_MODULE vsphere_guest vcenter_hostname=XXXXXXXXX guest=ansible password=VALUE_HIDDEN username=   <-- missing! Why?

I installed a new server and went back to the old version; same issue. How can I get a more detailed log?

What am I missing?

Thanks Noam

by Noam Greenberg at March 28, 2015 10:53 PM

Something built-in in Clojure to call an impure function on each element of a sequence?

I was wondering if Clojure has something built-in for the following code. I know I can do (map (fn [x] (f x)) coll) and then force the resulting lazy sequence, as done here. I don't want to do that.

(defn apply-to-all [f coll]
  ;; guard against empty input, then call f on each element for its effects
  (when (seq coll)
    (f (first coll))
    (recur f (rest coll))))

;; example usage
(apply-to-all println [0 1 2])

by rfmind at March 28, 2015 10:47 PM

/r/compsci

Help with two's complement.

I'm trying to understand the concept of two's complement.

I understand that the conversion process involves flipping bits and adding 1 but I'm not quite sure how to implement that. Can I go backwards?

I have these two questions from a previous exam paper for my course and I'm not quite sure where the answers come from:

Q. In 2's complement, what does the number 1010 0011 represent?

A. -103 (decimal)

Q. In 2's complement, what does the number 0111 1001 represent?

A. 121 (decimal)

The second answer is the same as the binary number converted to decimal, but I'm not sure if that's the right approach all the time or if that just happens to be the case here.
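To make the rule concrete, here is a minimal Scala sketch (my own illustration, not from the exam): an 8-bit two's-complement pattern denotes its unsigned value, minus 2^8 when the sign bit is set, and "going backwards" is exactly that subtraction.

    // Read the bits as unsigned, then subtract 2^8 if the sign bit is set.
    def twosComplement8(bits: String): Int = {
      val unsigned = Integer.parseInt(bits.replace(" ", ""), 2)
      if (unsigned >= 128) unsigned - 256 else unsigned
    }

    twosComplement8("0111 1001") // 121
    twosComplement8("1010 0011") // -93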

Any help would be appreciated.

submitted by flawyer
[link] [comment]

March 28, 2015 10:43 PM

CompsciOverflow

Algorithm to compute a recursive function on a given set

I am working on a property of a given set of natural numbers, and it seems difficult to compute. There is a function 'fun' which takes two inputs: one is a cardinality and the other is a set. If the set is empty then fun should return 0, because fun depends on the product of subsets of the set and on fun applied to all subsets of the complement set.

For clarification here is an example:

S is a set given S={1,2,3,4}. The function fun(2,S) is defined as

fun(2,S)=prod({1,2})*[fun(1,{3}) + fun(1,{4}) + fun(2,{3,4})] + 
         prod({1,3})*[fun(1,{2}) + fun(1,{4}) + fun(2,{2,4})] + 
         prod({1,4})*[fun(1,{3}) + fun(1,{2}) + fun(2,{2,3})] +
         prod({2,3})*[fun(1,{4}) + fun(1,{1}) + fun(2,{1,4})] +
         prod({2,4})*[fun(1,{1}) + fun(1,{3}) + fun(2,{3,1})] +
         prod({3,4})*[fun(1,{1}) + fun(1,{2}) + fun(2,{1,2})]

prod is defined as the product of all elements in a set, for example

prod({1,2})=2; 
prod({3,2})=6; 

I am trying to write pseudocode for this problem using a recursive method, but it's not working. In the base case the cardinality must be greater than zero, which means there must be at least one element in the set; otherwise prod would be zero and fun would return zero.

Pseudo code:

fun(i, S)
    if i == 0 or S is empty
        return 0
    else if |S| == i                  // the only i-subset is S itself
        return prod(S)                // nothing is left to recurse on
    else
        total = 0
        for each subset s' of S with |s'| = i
            C = S - s'
            inner = 0
            for j = 1 to |C|
                for each subset T of C with |T| = j
                    inner = inner + fun(j, T)
            total = total + prod(s') * inner
        return total
end fun

prod(s)
n=|s|
temp=1
for i=1 to n
    temp *=s(i) //s(1) is the 1st element of set s
end for
return temp
end prod
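For what it's worth, here is a direct (exponential-time) Scala sketch of this recursion, under the reading above that a chosen subset with an empty complement contributes just its product:

    // Direct evaluation of fun as defined above; exponential time.
    def prod(s: Set[Int]): Int = s.product

    def fun(i: Int, s: Set[Int]): Int =
      if (i == 0 || s.isEmpty) 0
      else
        s.subsets(i).map { sub =>
          val rest = s -- sub // complement of the chosen i-subset
          val inner =
            if (rest.isEmpty) 1 // the term reduces to prod(sub)
            else rest.subsets.filter(_.nonEmpty).map(t => fun(t.size, t)).sum
          prod(sub) * inner
        }.sum

    fun(2, Set(1, 2, 3, 4)) // expands exactly like the example above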

by precision at March 28, 2015 10:23 PM

QuantOverflow

Question about historical volatility ranking

I have seen this strategy example, which uses garch in a regime switching context:

https://systematicinvestor.wordpress.com/2012/01/06/trading-using-garch-volatility-forecast/

The author classifies volatility by percentile using a 252 day look back period. Volatility is defined as the standard deviation of the past 21 log returns. So far, so good.

However, the way the author ranks volatility is strange to me. Instead of simply taking the percentile rank of the current day against the past 252 days, he does this, in R:

vol.rank = percent.rank(SMA(percent.rank(hist.vol, 252), 21), 250)

So, assuming hist.vol is a vector of historical volatilities, he first assigns each day its percentile rank over the past 252 days. That would seem sufficient to me, but he then proceeds to take a simple moving average of the percentile ranks, and then again classifies each of these SMAs of percentile ranks into its own percentile rank.

What is the rationale in doing that?

by Chicoscience at March 28, 2015 10:23 PM

CompsciOverflow

What's the difference between declarative syntax and encapsulation?

I was first introduced to the idea of declarative syntax when using AngularJS. From what I understand, the idea is that you say, "do this" instead of "in order to do this, do this, this and this". You don't care how it's done, you just want it done, and you pass it off to some lower level of abstraction to do.

Now I'm going over the idea of encapsulation as I learn Java, and the idea seems very similar. From what I understand, the idea is that you break things up into modules and define an outwards-facing API for people to use. So people could use your API in a declarative manner to say, "do this".

Is this true? If so, what is the real difference between declarative syntax and encapsulation? Just that one describes syntax and the other describes the more abstract design philosophy?

Edit: I think that my question boils down to: what's the difference between making a declarative statement and making an API call?

by Adam Zerner at March 28, 2015 10:20 PM

I need to make a game. I am given the game board; I just don't know how to go about doing it

This is the code for the game board: '-' is for walkable spaces, 'H' is for the hero, '#' is for the walls, 'M' is for monsters, 'G' is for gold, and '%' is the stairs to reach the next level.

        "--------------------",
        "--------------------",
        "----H---------------",
        "--------------------",
        "--------------------",
        "------#######-------",
        "------#-------------",
        "------#-------------",
        "------#-----M-------",
        "------#-------------",
        "-----M--------------",
        "-G----M-------------",
        "--------------------",
        "--------------------",
        "--------------------",
        "------#-------------",
        "---G--#----------G--",
        "------###-----------",
        "-------------M------",
        "----------------G--%"};

by Henry Jenkins at March 28, 2015 10:19 PM

StackOverflow

Play Framework Project: How to include plugin from source

Background: I am in the process of integrating TypeScript into a Play Framework (2.2.6) project, and I am trying to use mumoshu's plugin to do so. The problem is, the plugin has issues when running "play dist" on a Windows machine. I've forked the code from GitHub in order to make some modifications to the source so I can continue using the plugin.

Question: I have a play framework plugin in the traditional source structure:

project/build.properties
project/Build.scala
project/plugins.sbt
src/main/scala/TypeScriptPlugin
src/main/scala/TypeScriptKeys.scala
...<other code>

I'd like to include this plugin into another project but I don't really know where to start and how to hookup the settings.

From previous suggestions, I've been able to add the module to my project as follows:

// In project/Build.scala...
object ApplicationBuild extends Build{
    lazy val typeScriptModule = ProjectRef(file("../../../play2-typescript"), "play2-typescript")

    lazy val main = play.Project(<appName>, <appVersion>, <appDependencies>).settings(typescriptSettings: _*).dependsOn(typeScriptModule).aggregate(typeScriptModule)
}

Here typescriptSettings is defined in the other project... I think; I'm still not 100% sure what typescriptSettings is, other than that adding this settings call enabled the plugin to work. This worked fine originally, when I had included the plugin in the plugins.sbt file and imported the package com.github.mumoshu.play2.typescript.TypeScriptPlugin._, but now that I'm including the source and explicitly including the module, I can't just import the package... or at least not the way I used to.

I am still new to scala/sbt and I am having difficulty finding helpful resources online. Any help would be appreciated.
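For reference, sbt can also depend on a plugin's source from the meta-build itself, which may be simpler than wiring the module into Build.scala; a minimal sketch (the relative path is an assumption):

    // project/plugins.sbt (hypothetical): build the plugin from its source
    // checkout and make it available to this build, instead of depending
    // on a published artifact.
    lazy val root = (project in file(".")).dependsOn(typescriptPlugin)
    lazy val typescriptPlugin = RootProject(file("../play2-typescript"))

With that in place, the plugin's package (and its settings) should again be importable from the build definition the way it was with the published version.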

by Paul Sachs at March 28, 2015 10:14 PM

/r/emacs

Local Buffer Tabs

I like to use spaces for indentation, but I'm working on a project that uses tabs.

What's the right way to set some buffers to use tabs instead of spaces? I figure there's some way to do this in projectile, but I'm not sure how.

submitted by nautola
[link] [comment]

March 28, 2015 10:09 PM

CompsciOverflow

LL(1) grammar for the untyped lambda-calculus

What I want to do

I am trying to define a LL(1) grammar of the lambda-calculus.

What I did

Here is the grammar:

  1. $Term \to Abs$
  2. $Term \to App$
  3. $Abs \to \lambda \ id \ . \ Term$
  4. $App \to Var \ AppSeq$
  5. $AppSeq \to App$
  6. $AppSeq \to \epsilon$
  7. $Var \to id$
  8. $Var \to (\ Term \ )$

Here are the FIRST sets:

  • $FIRST(Term) = \{ \lambda, id, ( \}$
  • $FIRST(Abs) = \{ \lambda \}$
  • $FIRST(App) = \{ id, ( \}$
  • $FIRST(AppSeq) = \{ id, (, \epsilon \}$
  • $FIRST(Var) = \{ id, ( \}$

Here are FOLLOW sets:

  • $FOLLOW(Term) = \{ \$, ) \}$
  • $FOLLOW(Abs) = \{ \$, ) \}$
  • $FOLLOW(App) = \{ \$, ) \}$
  • $FOLLOW(AppSeq) = \{ \$, ) \}$
  • $FOLLOW(Var) = \{ \$, (, id \}$

The dragon book gives the following definition:

A grammar G is LL(1) if and only if whenever A → α | β are two distinct productions of G, the following conditions hold:

  1. FIRST(α) and FIRST(β) are disjoint sets
  2. if ε is in FIRST(β), then FIRST(α) and FOLLOW(A) are disjoint sets
  3. likewise if ε is in FIRST(α)

Question

  1. Are my FIRST and FOLLOW sets correct?
  2. If not, how can I make the grammar LL(1)?

by authchir at March 28, 2015 10:08 PM

The algorithm yields optimal ternary codes

Steps to build the Huffman tree: the input is an array of unique characters along with their frequencies of occurrence, and the output is the Huffman tree.

  1. Create a leaf node for each unique character and build a min-heap of all leaf nodes (the min-heap is used as a priority queue; the frequency field is used to compare two nodes, so initially the least frequent character is at the root).

  2. Extract the two nodes with the minimum frequency from the min-heap.

  3. Create a new internal node with frequency equal to the sum of the two nodes' frequencies. Make the first extracted node its left child and the other extracted node its right child. Add this node to the min-heap.

  4. Repeat steps 2 and 3 until the heap contains only one node. The remaining node is the root and the tree is complete.

If we would like to generalize the Huffman algorithm to code words in a ternary system (i.e. code words using the symbols 0, 1 and 2), I think that we could do it as follows.

Steps to build the Huffman tree: the input is an array of unique characters along with their frequencies of occurrence, and the output is the Huffman tree.

  1. Create a leaf node for each unique character and build a min-heap of all leaf nodes.

  2. Extract the three nodes with the minimum frequency from the min-heap.

  3. Create a new internal node with frequency equal to the sum of the three nodes' frequencies. Make the first extracted node its left child, the second extracted node its middle child and the third extracted node its right child. Add this node to the min-heap.

  4. Repeat steps 2 and 3 until the heap contains only one node. The remaining node is the root and the tree is complete.
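For concreteness, here is a minimal Scala sketch of that procedure, with one extra detail the steps above leave out: each merge removes two nodes from the heap, so when the number of symbols is even a zero-frequency dummy leaf has to be added first; otherwise the final merge has only two children and the resulting code need not be optimal.

    // Sketch of ternary Huffman; the dummy pad keeps the leaf count odd.
    sealed trait Node { def freq: Int }
    case class Leaf(sym: Char, freq: Int) extends Node
    case class Internal(children: List[Node]) extends Node {
      val freq = children.map(_.freq).sum
    }

    def ternaryHuffman(freqs: Map[Char, Int]): Node = {
      val heap = scala.collection.mutable.PriorityQueue.empty[Node](
        Ordering.by((n: Node) => -n.freq)) // dequeue smallest frequency first
      freqs.foreach { case (s, f) => heap.enqueue(Leaf(s, f)) }
      if (heap.size % 2 == 0) heap.enqueue(Leaf('\u0000', 0)) // dummy symbol
      while (heap.size > 1)
        heap.enqueue(Internal(List.fill(3)(heap.dequeue())))
      heap.dequeue()
    }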

How can we prove that the algorithm yields optimal ternary codes?

by evinda at March 28, 2015 09:58 PM

QuantOverflow

Portfolio optimzation : efficient frontier with respect to risk aversion parameter with R

I am currently trying to write a little script in R to determine the optimal weights given a fixed risk-aversion parameter. The problem is that I think the parabola should become wider as I increase the risk-aversion parameter; however, in my case it yields the same parabola with only minor differences. I am not sure whether the way I formulated the solve.QP call is correct. My idea was to minimize -w^T mu + (aversion/2) w^T Sigma w, where w = weights, mu = mean of returns, and Sigma = covariance of returns. I did try portfolio.optim from the tseries package, but it does not allow me to fix the risk-aversion parameter... or does it?

You may find my code below.

    library(quadprog) # for solve.QP

    efficientPortfolio <- function(er, cov.mat, aversion = 3, target.return) {
      Dmat <- cov.mat * aversion
      dvec <- as.vector(er)
      ones <- rep(1, length(er))
      zeros <- rep(0, length(er))
      Identity <- diag(length(er))

      # Constraints: weights sum to 1 (equality), expected return >= target,
      # and no short selling (w >= 0). The return row uses er rather than
      # colMeans(data), so the function depends only on its arguments.
      Amat <- cbind(ones, as.vector(er), Identity)
      bvec <- c(1, target.return, zeros)

      sol <- solve.QP(Dmat, dvec, Amat, bvec = bvec, meq = 1)
      weights <- sol$solution

      exp.ret <- crossprod(dvec, weights)
      std.dev <- sqrt(weights %*% cov.mat %*% weights)

      list("er" = as.vector(exp.ret),
           "sd" = as.vector(std.dev),
           "weights" = weights)
    }



    efficientFrontier <- function(er, cov.mat, nport = 20, aversion = 3) {
      ef <- matrix(0, nport, 2 + length(er))
      pm <- seq(min(er), max(er), length.out = nport)

      for (i in 1:nport) {
        port <- efficientPortfolio(er, cov.mat, aversion = aversion,
                                   target.return = pm[i])
        ef[i, 1] <- port$sd
        ef[i, 2] <- port$er
        ef[i, 3:ncol(ef)] <- port$weights
      }
      list(ef = ef[, 1:2], weights = ef[, -1:-2])
    }

Also, er and cov.mat in my case are:

   er<-c(2.626568e-05, 6.542483e-06, 2.169358e-05, 1.510713e-04 ,7.113906e-05 )
   cov.mat<-matrix(c(4.408143e-04 ,   1.487359e-05, 1.680034e-05,   3.066478e-04,   2.069675e-04,
1.487359e-05,   7.270984e-03,   9.134981e-06,   -1.699836e-05,  7.956140e-06,
1.680034e-05,   9.134981e-06,   6.528898e-04,   5.397248e-06,   9.887768e-06,
3.066478e-04,   -1.699836e-05,  5.397248e-06,   1.473456e-03,   6.657914e-04,
2.069675e-04,   7.956140e-06,   9.887768e-06,   6.657914e-04,   5.090455e-04),5,5)

The package I am using is quadprog. Thanks in advance. Jc

by Jcl at March 28, 2015 09:55 PM

CompsciOverflow

Tallest Person Average Memory Updating?

We ran into a problem that came up in an interview two days ago. Can you help us with any idea or hint?

A sequence of $n$ people, $\langle\,p_1,p_2,\dotsc p_n\,\rangle$ enter a room. We want to find the index, $i$, of the tallest person in the room. We have one variable for saving that index, which is updated when we see a person who is taller than the maximum height so far. We want to calculate the average number of times our variable is updated. For simplicity, assume that $p$ is a permutation of $\{1, 2, \dotsc, n\}$

Short answer: it is close to $\ln n$ (the natural logarithm).

How can one solve such a question?
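A sketch of the standard argument, via linearity of expectation: let $X_i$ indicate that the $i$-th person causes an update, i.e. that $p_i$ is the tallest among the first $i$ people. In a uniformly random permutation each of the first $i$ people is equally likely to be that maximum, so $\Pr[X_i = 1] = 1/i$, and therefore

$$\mathbb{E}\Big[\sum_{i=1}^{n} X_i\Big] = \sum_{i=1}^{n} \frac{1}{i} = H_n = \ln n + O(1).$$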

by Anjela Minoeu at March 28, 2015 09:50 PM

TheoryOverflow

Conversion into an integer linear program for the Ising spin state problem

I am trying to model the Ising spin state problem as an integer linear program and find the optimal ground state using lp_solve. (This is just a miniature version of the Ising state problem.)

$$\text{maximise: } \sum J_{ij}S_{i}S_{j}$$ $$-1 \leq J_{ij} \leq 1$$ $$S_{i}, S_{j} \in \{-1, 1\}$$ The value of $J_{ij}$ is given. The goal is to find optimal values of $S_{i}$ to maximise the value. For example: $J_{12}=1, J_{13}=-1, J_{23}=-1$. One of the solutions for maximum energy is 3, with $S_{1}=1, S_{2}=1, S_{3}=-1$.

I am finding it difficult to convert this into integer linear program.

This is my initial approach to the conversion. I tried to take an additional variable $X_{i}$ and convert the program to $$\text{maximise: } \sum X_{i}$$ with the conditions

$$\text{if } (S_{i}=-1 \text{ or } S_{j}=-1) \text{ and } J_{ij}=-1 \implies X_{i}=1$$
$$\text{if } (S_{i}=-1 \text{ or } S_{j}=-1) \text{ and } J_{ij}=1 \implies X_{i}=-1$$
$$\text{if } (S_{i}=-1 \text{ and } S_{j}=-1) \text{ and } J_{ij}=-1 \implies X_{i}=-1$$
$$\text{if } (S_{i}=-1 \text{ and } S_{j}=-1) \text{ and } J_{ij}=1 \implies X_{i}=1$$

I don't know if this approach is correct or not, and I don't know how to convert this into a linear program. Any suggestion or help is greatly appreciated.
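For reference, a standard linearization sketch (not from the question): map each spin to a binary variable $x_i \in \{0,1\}$ via $S_i = 2x_i - 1$, so that $S_iS_j = 4x_ix_j - 2x_i - 2x_j + 1$, and introduce a variable $y_{ij}$ for each product $x_ix_j$ together with the usual constraints

$$y_{ij} \leq x_i, \quad y_{ij} \leq x_j, \quad y_{ij} \geq x_i + x_j - 1, \quad y_{ij} \geq 0.$$

For binary $x_i, x_j$ these force $y_{ij} = x_ix_j$ exactly, so the objective becomes the linear expression $\sum J_{ij}(4y_{ij} - 2x_i - 2x_j + 1)$.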

by suhastheju at March 28, 2015 09:50 PM

StackOverflow

Scala: How to mask the first N characters of a string

Given a string that represents a credit card number...

val creditCardNo = "1111222233334444"

... how do I mask the first 12 characters with *?

val maskedCreditCardNo = "************4444"
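One minimal approach, as a sketch (assuming the intent is always to keep the last four characters, whatever the total length):

    // Mask everything except the last 4 characters.
    val maskedCreditCardNo = "*" * (creditCardNo.length - 4) + creditCardNo.takeRight(4)
    // "************4444"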

by j3d at March 28, 2015 09:43 PM

Running Spark Application from Eclipse

I am trying to develop a spark application on Eclipse, and then debug it by stepping through it.

I downloaded the Spark source code and I have added some of the spark sub projects(such as spark-core) to Eclipse. Now, I am trying to develop a spark application using Eclipse. I have already installed the ScalaIDE on Eclipse. I created a simple application based on the example given in the Spark website.

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object SimpleApp {
  def main(args: Array[String]) {
    val logFile = "YOUR_SPARK_HOME/README.md" // Should be some file on your system
    val conf = new SparkConf().setAppName("Simple Application")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logFile, 2).cache()
    val numAs = logData.filter(line => line.contains("a")).count()
    val numBs = logData.filter(line => line.contains("b")).count()
    println("Lines with a: %s, Lines with b: %s".format(numAs, numBs))
  }
}

To my project, I added the spark-core project as a dependent project (right click -> build path -> add project). Now, when I try to build and run my application, the project shows that it has errors, but I don't see any errors listed in the Problems view within Eclipse, nor do I see any lines highlighted in red, so I am not sure what the problem is. My assumption is that I need to add external jars to my project, but I am not sure which jars those would be. The error is caused by val conf = new SparkConf().setAppName("Simple Application") and the subsequent lines; when I remove those lines, the error goes away. I would appreciate any help and guidance. Thanks!

by AndroidDev93 at March 28, 2015 09:37 PM

QuantOverflow

Effective & Maturity Date Modified Following

I am constructing discount curve for tenor 1 month.

The first instrument, PLN_1M_WIBOR, has its Effective Date on 2015-01-29 (spot). I was wondering what the Maturity Date should be: 2015-02-27 or 2015-03-02? I am using the modified following convention, and according to this convention I suppose it should be 2015-02-27, but I am not sure.

The second instrument is FRA_0102, dated today (2015-01-27), so its Effective Date should be 2015-02-27?

by Mike M at March 28, 2015 09:23 PM

CompsciOverflow

Can each VLIW sub-instruction execute any instruction?

Say you have a 128-bit (32*4) VLIW word. Can each 32-bit sub-word contain any operation (ADD, CALL, BRANCH, ...) if there are no hazards, or can each sub-word only drive one particular functional unit (so that if the BRANCH functional unit isn't used, that sub-word becomes a NOP)?

by gilianzz at March 28, 2015 09:22 PM

Undecidable (logic) Post correspondence problem instance

Since the Post correspondence problem is undecidable, there exists an instance of this problem such that we can neither prove that the instance is positive nor prove that the instance is negative; otherwise we could just use an algorithm that enumerates proofs.

It seems to me that we should be able to exhibit such an instance explicitly, but I can't find one. What do you think about this?

To find such an instance, I try a semi-decision procedure and, after some time, if this doesn't succeed, I try to prove that the instance is negative. But such an instance must exist, since the problem is not computable.

It also seems to me that there cannot be a separate trivial algorithm for each specific instance, because otherwise we could use an algorithm that enumerates proofs (if there is an algorithm, there is a proof), and that would work for all instances.

by user30102 at March 28, 2015 09:08 PM

StackOverflow

Coq: Prop versus Set in Type(n)

I want to consider the following three (related?) Coq definitions.

Inductive nat1: Prop :=
  | z1 : nat1
  | s1 : nat1 -> nat1.

Inductive nat2 : Set := 
  | z2 : nat2
  | s2 : nat2 -> nat2.

Inductive nat3 : Type :=
  | z3 : nat3
  | s3 : nat3 -> nat3.

All three types give induction principles to prove a proposition holds.

nat1_ind
     : forall P : Prop, P -> (nat1 -> P -> P) -> nat1 -> P

nat2_ind
     : forall P : nat2 -> Prop,
       P z2 -> (forall n : nat2, P n -> P (s2 n)) -> forall n : nat2, P n

nat3_ind
     : forall P : nat3 -> Prop,
       P z3 -> (forall n : nat3, P n -> P (s3 n)) -> forall n : nat3, P n

The Set and Type versions also come with induction principles for definitions over Set and Type (rec and rect respectively). This is the extent of my knowledge about the difference between Prop and Set: Prop has a weaker induction principle.

I have also read that Prop is impredicative while Set is predicative, but this seems like a property rather than a defining quality.

While some practical (moral?) differences between Set and Prop are clear, the exact, defining differences between Set and Prop, as well as where they fit into the universe of types, are unclear. Running Check on Prop and Set gives Type (* (Set)+1 *), and I'm not exactly sure how to interpret this...

by Jonathan Gallagher at March 28, 2015 08:58 PM

Can Scala call by reference?

I know that Scala supports call-by-name from ALGOL, and I think I understand what that means, but can Scala do call-by-reference like C#, VB.NET, and C++ can? I know that Java cannot do call-by-reference, but I'm unsure if this limitation is solely due to the language or also the JVM.

This would be useful when you want to pass an enormous data structure to a method, but you don't want to make a copy of it. Call-by-reference seems perfect in this case.

by Slack at March 28, 2015 08:56 PM

Seq, SeqLike, GenSeq or GenSeqLike?

When I create a function, should I have it take as an argument a Seq, SeqLike, GenSeq, or GenSeqLike? (So many choices!)

My only requirement is that I can map over it and produce a collection with the same number and order of elements as before.

Typically I "program to interfaces" and choose the most general type possible. In this case, that would be a GenSeqLike.

Is this correct/idiomatic?

by Paul Draper at March 28, 2015 08:44 PM

lein ring auto-refresh overwrites request body?

I wrote simple client-server app referring to "ClojureScript: Up and Running".

https://github.com/phaendal/clojure-simple-client-server

As shown in below server-code, /text prints request and body to console and returns body from (slurp (:body req)).

But if :auto-refresh? is set to true in project.clj, (slurp (:body req)) returns an empty string instead of the sent value.

Why does it return an empty string, and how can I get the request body with auto-refresh enabled?

(ns client-server.server
  (:gen-class)
  (:require [compojure.route :as route]
            [compojure.core :as compojure]
            [ring.util.response :as response]))

(defn simple-print [req]
  (let [body (slurp (:body req) :encoding "utf-8")]
    (println req)
    (println (str "slurped: " body))
    body))

(compojure/defroutes app
  (compojure/POST "/text" request (simple-print request))
  (compojure/GET "/" request
                 (-> "public/index.html"
                     (response/resource-response)
                     (response/content-type "text/html")
                     (response/charset "utf-8")))
  (route/resources "/"))

by phaendal at March 28, 2015 08:34 PM

What do these Coq functions mean?

I am going through the code Q_denumerable.v in library QArithSternBrocot and this is what I came across.

Fixpoint positive_to_Qpositive_i (p:positive) : Qpositive := 
   match p with 
   | xI p => nR (positive_to_Qpositive_i p)
   | xO p => dL (positive_to_Qpositive_i p)
   | xH => One
   end. 

What do nR and dL mean?

by Sai Ganesh Muthuraman at March 28, 2015 08:32 PM

/r/compsci

The Count Min Sketch Data Structure, Always Increasing?

Count Min Sketch paper - http://dimacs.rutgers.edu/~graham/pubs/papers/cm-full.pdf

Wikipedia page - http://en.wikipedia.org/wiki/Count%E2%80%93min_sketch

I'm working on implementing a count-min sketch to populate a queue of heavy hitters among the uncategorized domain names our system encounters, and from my understanding this seems to be one of the best approaches to that these days.

I'm having trouble understanding how this would work in an online or continuously streaming mode. The values in the table grow continuously and are not tug-of-war'd the way the hash-based counters in the original count sketch are.
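For concreteness, a minimal Scala sketch of the update/query cycle (my own illustration; the hashing is deliberately simplistic): every update only adds to counters, which is exactly why the table grows monotonically, and estimates can only overshoot the true count.

    // d rows of w counters; counters only ever increase.
    class CountMinSketch(d: Int, w: Int) {
      private val table = Array.ofDim[Long](d, w)
      private val seeds = Array.tabulate(d)(i => 1000003L * (i + 1) + 17)

      private def bucket(row: Int, key: String): Int = {
        val h = (key.hashCode.toLong * seeds(row)).toInt
        ((h % w) + w) % w // force a non-negative index
      }

      def add(key: String, count: Long = 1L): Unit =
        for (r <- 0 until d) table(r)(bucket(r, key)) += count

      def estimate(key: String): Long =
        (0 until d).map(r => table(r)(bucket(r, key))).min
    }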

Is the goal just to choose large enough row/column counts so that the error factor and failure probability are very low? How would that work with such a large key space, like every .com domain name?

And how could this data structure, or a closely kinned one, be augmented to find "trending topics" or rapidly rising keys?

Thanks,

submitted by ramblinpeck
[link] [comment]

March 28, 2015 08:30 PM

StackOverflow

Checking Clojure pre-conditions without running the function?

I have one function that does some (possibly lengthy) work (defn workwork [x] ...) and some other functions that check ahead of time whether the call will succeed (defn workwork-precondition-1 [x] ...).

The precondition functions should be evaluated every time workwork is called (e.g. using :pre). The precondition functions should also be collected (and-ed) into a single function and made available to client code directly (e.g. to disable a button).

Which is the idiomatic way to solve this in Clojure while avoiding code duplication?

In particular, is there any way to evaluate the pre-conditions of a function without running the function body?

by 4ZM at March 28, 2015 08:28 PM

QuantOverflow

Forward Curves and Par Yield Curves

I'm currently reading a research paper on the yield curve by Salomon Brothers, and it states that when the forward curve is above the par yield curve, it is seen as cheaper. If, for example, years 9-12 of the forward rate curve lie above the par yield curve, with the forward 12-year rate above the 9-year rate as well, it is recommended to buy the 12-year bond while selling the 9-year bond.

Unfortunately, I am unable to accurately grasp the concept behind this in relation to the par yield curve. Please help! Thank you!

by Timothy Ng at March 28, 2015 08:23 PM

/r/compsci

Is this problem NP-complete?

We are given a graph G = (V, E) and integers c and k, and need to find a subgraph H = (V', E') in which E' is a subset of E, every vertex in V' has in-degree + out-degree exactly c, and V' has <= k connected components. I want to know whether or not this problem is NP-complete.

What I tried: I tried an easier version of the problem, for c = 2 (i.e., the total degree is 2). In this case, I figured it would be a good candidate for a reduction from 3SAT (both possible values assigned to a literal). However, I'm not sure how to construct such a graph given clauses Ci = (x_i1 or x_i2 or x_i3) for all i. I would think that each literal would be represented by a vertex in the graph, but I don't know how to produce a proper reduction.

Also, I cannot find any other NP-complete problem about a graph that involves having fewer than k connected components (that requirement alone is not hard to meet, but here it comes along with the other criterion).

Also, what about the other direction (i.e., requiring >= k connected components)? If I were to show the previous problem NP-complete, then unless NP = coNP this other problem is not NP-complete (and it is not in P, for obvious reasons).

submitted by coaster367
[link] [comment]

March 28, 2015 08:22 PM

StackOverflow

Trying to install SBT-0.13.8 for Windows 7 installs SBT version 0.12.4

I have (multiple times) tried to install SBT 0.13.8 from the SBT download page via the SBT-0.13.8-MSI button, and I always end up with an SBT version that shows the following output:

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[info] Loading project definition from C:\Users\Tina\Desktop\example\project\project
error: error while loading CharSequence, class file 'C:\Program Files\Java\jre1.8.0_20\lib\rt.jar(java/lang/CharSequence.class)' is broken (bad constant pool tag 15 at byte 1470)
[error] Type error in expression
Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore? i
[warn] Ignoring load failure: no project loaded.
> about
[info] This is sbt 0.12.4
[info] No project is currently loaded
[info] sbt, sbt plugins, and build definitions are using Scala 2.9.2

SBT produces an error message and even reports itself as version 0.12.4!

I really made sure that I have no other version of SBT installed, and I even rebooted before re-installing it, but nothing changed. The SBT files have a creation date of March 21, 2015, so this seems to be the newest version. Why does it report itself as 0.12.4, and why does it not work with JDK 1.8?

by Tina Hildebrandt at March 28, 2015 08:22 PM

QuantOverflow

How do I show that there is no tangency portfolio?

Question: Suppose that the risk-free return is equal to the expected return of the global minimum variance portfolio. Show that there is no tangency portfolio.

A hint for the question says to show that there are no $\delta$ and $\lambda$ satisfying

$$\delta\Sigma^{-1}(\mu - R_f\mathbf{1}) = \lambda\pi_{\mu} + (1-\lambda)\pi_{\mathbf{1}},$$

but I'm not sure what to make of it. Any help is appreciated.

by user2034 at March 28, 2015 08:17 PM

StackOverflow

Typed primitives in Scala 2.11

As I see it, primitive types like String and Long cannot be extended, as they are defined final. This is a pity for a type-safe approach. In code that does not revolve around e.g. string manipulation, I prefer to type my data rather than use String, Long, Int, etc.: as long as I'm using a type-safe language, I'd like my code to really be typed from the ground up.

As per experimentation, and as demonstrated in a very old question, type aliases do not seem to facilitate this. Currently, I use things like:

case class DomainType(value: String)

at the cost of having to use .value where the value is needed.

Is there any language feature, introduced after Scala 2.8 or otherwise, that can neatly facilitate type-safe sub-typed primitives? Are there any object overrides that proxy the underlying value but still let type matching occur?
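One feature worth checking, introduced after 2.8 (in Scala 2.10): value classes. A sketch of the idea: wrapping a single val in a class that extends AnyVal keeps the nominal type for the compiler, while the wrapper is elided at runtime in most cases.

    // Type-safe at compile time; at runtime a bare String is usually passed.
    case class DomainType(value: String) extends AnyVal

The .value access remains, but the usual allocation cost of the wrapper largely disappears.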

by matt at March 28, 2015 08:16 PM

UnixOverflow

Porting Linux date parsing to FreeBSD

I have a date command in this format: date -d $datum +"%Y-%m-%d". On Linux it worked OK, but FreeBSD says this:

ERROR wrong format
usage: date [-jnRu] [-d dst] [-r seconds] [-t west] [-v[+|-]val[ymwdHMS]] ... 
            [-f fmt date | [[[[[cc]yy]mm]dd]HH]MM[.ss]] [+format]

This is in response to this line:

date -d $datum +"%Y-%m-%d" >/dev/null 2>&1 || echo "ERROR wrong format" 

But it prints out the error and then continues with the code, seemingly correctly. What am I supposed to do so that it doesn't print the error, and so that the program exits if there is an error?

by applenic at March 28, 2015 08:15 PM

CompsciOverflow

Poly-time reduction: D and D Comp

I am looking at the IndependentSet problem and its complement. I want to show that IS is poly-time reducible to its complement, but I am struggling to come up with the reduction function. For clarity, I define the complement as: does every subset of V of size >= k contain at least one edge between its vertices? My intuition is below.

           f(G, k) = (G, i - k), where i = |V|?

by Teodorico Levoff at March 28, 2015 08:12 PM

TheoryOverflow

The algorithm yields optimal ternary codes

Steps to build the Huffman tree: the input is an array of unique characters along with their frequencies of occurrence, and the output is the Huffman tree.

  1. Create a leaf node for each unique character and build a min-heap of all leaf nodes (the min-heap is used as a priority queue; the frequency field is used to compare two nodes, so initially the least frequent character is at the root).

  2. Extract the two nodes with the minimum frequency from the min-heap.

  3. Create a new internal node with frequency equal to the sum of the two nodes' frequencies. Make the first extracted node its left child and the other extracted node its right child. Add this node to the min-heap.

  4. Repeat steps 2 and 3 until the heap contains only one node. The remaining node is the root and the tree is complete.

If we would like to generalize the Huffman algorithm to code words in a ternary system (i.e. code words using the symbols 0, 1 and 2), I think that it would be as follows.

Steps to build the Huffman tree: the input is an array of unique characters along with their frequencies of occurrence, and the output is the Huffman tree.

  1. Create a leaf node for each unique character and build a min-heap of all leaf nodes.

  2. Extract the three nodes with the minimum frequency from the min-heap.

  3. Create a new internal node with frequency equal to the sum of the three nodes' frequencies. Make the first extracted node its left child, the second extracted node its middle child and the third extracted node its right child. Add this node to the min-heap.

  4. Repeat steps 2 and 3 until the heap contains only one node. The remaining node is the root and the tree is complete.

Am I right? How can we prove that the algorithm yields optimal ternary codes?

by evinda at March 28, 2015 08:03 PM

StackOverflow

map with lambda vs map with function - how to pass more than one variable to function?

I wanted to learn about using map in Python, and a Google search brought me to http://www.bogotobogo.com/python/python_fncs_map_filter_reduce.php, which I have found helpful.

One of the code samples on that page uses a for loop and puts map within that for loop in an interesting way, and the list used within the map call is actually a list of 2 functions. Here is the code:

def square(x): 
    return (x**2)

def cube(x):
    return (x**3)

funcs = [square, cube]

for r in range(5):
    value = map(lambda x: x(r), funcs)
    print value

output:

[0, 0]
[1, 1]
[4, 8]
[9, 27]
[16, 64]

So, at this point in the tutorial, I thought: "well, if you can write that code with a function created on the fly (a lambda), then it could be written using a standard function defined with def". So I changed the code to this:

def square(x): 
    return (x**2)

def cube(x):
    return (x**3)

def test(x):
    return x(r)

funcs = [square, cube]

for r in range(5):
    value = map(test, funcs)
    print value

I got the same output as from the first piece of code, but it bothered me that the variable r was taken from the global namespace and that the code is not tight functional programming. And that is where I got tripped up. Here is my code:

def square(x): 
    return (x**2)

def cube(x):
    return (x**3)

def power(x):
    return x(r)

def main():
    funcs = [square, cube]
    for r in range(5):
        value = map(power, funcs)
        print value

if __name__ == "__main__":
    main()

I have played around with this code, but the issue is with passing a value into the function def power(x). I have tried numerous ways of passing into this function, but the lambda has the ability to automatically assign the variable x to each element of the list funcs.

Is there a way to do this using a standard def function, or is it not possible, so that only a lambda can be used? Since I am learning Python and this is my first language, I am trying to understand what's going on here.

by Darren at March 28, 2015 07:53 PM

Persist scala collection with jpa fails for large numbers

I am trying to persist a large number of entities (> 100); strangely, however, just the first 10 entities are saved to the DB, without any exception or error. What can cause such behavior (using this Scala code)?

def persistSingleEvent(id: String, e: Seq[EventMembers]): Event = {
  val persistEvent = new Event()
  // setting values
  persistEvent
}

val groupedEvents = events.groupBy(_.id)
groupedEvents foreach { case (name, eventMembers) =>
  eventsRepository.save(persistSingleEvent(name, eventMembers))
}

by igx at March 28, 2015 07:46 PM

CompsciOverflow

Compression functions are only practical because "The bit strings which occur in practice are far from random"?

I would have made a comment, as this pertains to Andrej Bauer's answer in this thread; however, I believe it is worth a question.

Andrej explains that, given the set of all bit strings of length 3 or less, a lossless compression function can only "compress" some of them. Others, for instance "01", would actually have to be expanded to a string such as "0001", with length 4. The compression ratio is simply the average compression across the input set.

This makes lossless compression seem impractical, but the important quote is this:

The bit strings which occur in practice are far from random and exhibit a lot of regularity.

I have a hard time believing that, for instance, multimedia files are represented by anything other than random bit strings. Is there truly a pattern that compression functions leverage to make the algorithms useful in reality?

by AlexMayle at March 28, 2015 07:45 PM

StackOverflow

How can I solve this type mismatch in Scala?

 def balance(chars: List[Char]): Boolean = {
    if (chars.isEmpty == true) true
       else transCount(chars, 0)

 def transCount(chars: List[Char], pro: Int): Boolean = {
  var dif = pro
  chars match {
    case "(" :: Nil => false
    case ")" :: Nil => dif -= 1; if (dif == 0) true else false
    case _ :: Nil => if (dif == 0) true else false

    case "(" :: tail => dif += 1
      transCount(tail, dif)
    case ")" :: tail => dif -= 1;
      if (dif < 0) false
      else transCount(tail, dif)
    case _ :: tail => transCount(tail, dif)
    }
  }
}

I get this type mismatch error:

Error:(30, 13) type mismatch;
 found   : String("(")
 required: Char
       case "(" :: Nil => false
            ^

but I really do not know how to fix it (please do not use char.toList).
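For reference, the mismatch arises because the elements of a List[Char] are Chars, while "(" is a String; a sketch of the same logic with Char literals (plus an explicit Nil case, so empty input cannot throw a MatchError):

    def transCount(chars: List[Char], dif: Int): Boolean = chars match {
      case '(' :: Nil  => false
      case ')' :: Nil  => dif - 1 == 0
      case _   :: Nil  => dif == 0
      case '(' :: tail => transCount(tail, dif + 1)
      case ')' :: tail => if (dif - 1 < 0) false else transCount(tail, dif - 1)
      case _   :: tail => transCount(tail, dif)
      case Nil         => dif == 0
    }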

by rockmerockme at March 28, 2015 07:32 PM

Planet Theory

Links: HBR article on Women in STEM and AAUW Report

A recent article on bias issues for women in STEM from the Harvard Business Review.

A slide presentation by AAUW related to their report: Solving the Equation: The Variables for Women’s Success in Engineering and Computing.  The full report can be downloaded for free here.  

by Michael Mitzenmacher (noreply@blogger.com) at March 28, 2015 07:32 PM

StackOverflow

java.util.NoSuchElementException: Column not found ID in table demo.usertable in Cassandra Spark

I am trying to write an RDD[CassandraRow] into an existing Cassandra table using the Spark Cassandra Connector. Here is my piece of code:

val conf = new SparkConf().setAppName(getClass.getSimpleName)
  .setMaster("local[*]")
  .set("spark.cassandra.connection.host", host)
val sc = new SparkContext("local[*]", keySpace, conf)
val rdd = sc.textFile("hdfs://hdfs-host:8020/Users.csv")
val columns = Array("ID", "FirstName", "LastName", "Email", "Country")
val types = Array("int", "string", "string", "string", "string")
val crdd = rdd.map { p =>
  val tokens = p.split(",")
  new CassandraRow(columns, tokens)
}
val targetedColumns = SomeColumns.seqToSomeColumns(columns)
crdd.saveToCassandra(keySpace, tableName, targetedColumns, WriteConf.fromSparkConf(conf))

When I run this code, I get the following exception:

Exception in thread "main" java.util.NoSuchElementException: Column not found ID in table demo.usertable

Here is the actual schema of the table:

CREATE TABLE usertable (
  id int,
  country text,
  email text,
  firstname text,
  lastname text,
  PRIMARY KEY ((id))
)
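Note that Cassandra stores unquoted identifiers in lowercase, so the table's columns are id, firstname, and so on. A plausible fix (assuming the connector matches column names case-sensitively) is to make the RDD's column names match the stored names exactly:

    // Hypothetical fix: use the lowercase names Cassandra actually stores.
    val columns = Array("id", "firstname", "lastname", "email", "country")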

Any suggestion? Thanks

by Hafiz Mujadid at March 28, 2015 07:26 PM

Handle timeout of response in akka-http

In akka-http routing I can return a Future as a response, which is implicitly converted via ToResponseMarshaller. Is there some way to handle a timeout of this future, or a connection timeout at the route level? Or is the only way to use the Await() function? Right now the client can wait for a response forever.

           complete {
              val future = for {
                response <- someIOFunc()
                entity <- someOtherFunc()
              } yield entity
              future.onComplete({
                case Success(result) =>
                  HttpResponse(entity = HttpEntity(MediaTypes.`text/xml`, result))
                case Failure(result) =>
                  HttpResponse(entity = utils.getFault("fault"))
              })
              future
            }
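One sketch of a future-level timeout (my own illustration, assuming an ActorSystem and an ExecutionContext in scope): race the work against a timer using Future.firstCompletedOf and akka.pattern.after, and complete the route with the combined future.

    import java.util.concurrent.TimeoutException
    import scala.concurrent.{ExecutionContext, Future}
    import scala.concurrent.duration._
    import akka.actor.ActorSystem
    import akka.pattern.after

    // Fails the returned future if `f` has not completed within `d`.
    def withTimeout[T](f: Future[T], d: FiniteDuration)
                      (implicit system: ActorSystem, ec: ExecutionContext): Future[T] =
      Future.firstCompletedOf(Seq(
        f,
        after(d, system.scheduler)(
          Future.failed(new TimeoutException(s"no response within $d")))))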

by diemust at March 28, 2015 07:24 PM

Dealiasing Types in Scala reflection

How can I resolve aliases, given a Type? I.e.

import reflect.runtime.universe._

type Alias[A] = Option[Option[A]]
val tpe = typeOf[Alias[_]] 
val ExistentialType(quantified, underlying) = tpe

How do I get Option[Option[_$1]] from underlying (or from tpe)? I know that typeSymbol does resolve aliases, but it seems to lose the type parameters in the process:

scala> val tT = typeOf[Alias[_]].typeSymbol
tT: reflect.runtime.universe.Symbol = class Option

scala> tT.asType.toType
res3: reflect.runtime.universe.Type = Option[A]

scala> tT.asType.typeParams
res4: List[reflect.runtime.universe.Symbol] = List(type A)
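One candidate, as a sketch (assuming Scala 2.11, where Type exposes a dealias method; on 2.10 normalize played a similar role): dealias expands the alias itself, keeping the applied type arguments instead of going through the type symbol.

    // Expand the alias while preserving the type arguments.
    val dealiased = underlying.dealias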

by Alexey Romanov at March 28, 2015 07:17 PM

Delayed Execution of a series of operations

I'm trying to write a class such that when you call a function defined in the class, it stores the call in an array of functions instead of executing it right away; the user then calls exec() to execute them:

class TestA(val a: Int, newAction: Option[ArrayBuffer[(Int) => Int]]) {
  val action: ArrayBuffer[(Int) => Int] = if (newAction.isEmpty) ArrayBuffer.empty[(Int) => Int] else newAction.get
  def add(b: Int): TestA = {action += (a => a + b); new TestA(a, Some(action))}

  def exec(): Int = {
    var result = 0
    action.foreach(r => result += r.apply(a))
    result
  }

  def this(a:Int) = this(a, None)
}

Then this is my test code:

  "delayed action" should "delay action till ready" in {
    val test = new TestA(3)
    val result = test.add(5).add(5)
    println(result.exec())
  }

This gives me a result of 16, because 3 was passed in twice and got 5 added to it twice. I guess the easy way to solve this would be to not pass in a value for the second round, e.g. to change val a: Int to val a: Option[Int]. That helps, but it doesn't solve my real problem: letting the second function see the result of the first execution.

Does anyone have a better solution to this? Or, if this is a known pattern, can anyone share a tutorial on it?
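A sketch of one alternative that gives exactly that chaining: keep a single composed function and extend it with andThen, so each step receives the previous step's result.

    // Compose now, run later: the second step sees the first step's result.
    val pipeline: Int => Int = ((_: Int) + 5).andThen(_ + 5)
    pipeline(3) // 13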

by Allen Nie at March 28, 2015 07:14 PM

StackOverflow

Scala regular expression : matching a long unicode Devanagari pattern fails

Consider the following script code:

import scala.util.matching.Regex

val VIRAMA = "्"
val consonantNonVowelPattern = s"(म|त|य)([^$VIRAMA])".r
// val consonantNonVowelPattern = s"(थ|ठ|छ|स|ब|घ|ण|ट|ज|ग|न|ष|भ|ळ|ढ|ख|श|प|ह|ध|ङ|म|झ|ड|ल|व|र|फ|क|द|च|ञ|त|य)([^$VIRAMA])".r
var output = "असय रामः "
output = consonantNonVowelPattern.replaceAllIn(output, _ match {
  case consonantNonVowelPattern(consonant, followingCharacter) =>
    consonant + VIRAMA + "a" + followingCharacter
})
println("After virAma addition: " + output.mkString("-"))

It produces the following correct output:

After virAma addition: अ-स-य-्-a- -र-ा-म-्-a-ः-

However, if I use the longer pattern (commented out above), I get the following wrong output:

After virAma addition: अ-स-्-a-य- -र-्-a-ा-म-्-a-ः-

Is this a bug? Am I doing something wrong?

by vishvAs vAsuki at March 28, 2015 07:06 PM

Apache Spark - JDBC Sources

Did anyone manage to pull data out of, or at least connect to, an RDBMS through JDBC with the new feature released in 1.3, using the built-in source for Spark SQL instead of JdbcRDD?

https://databricks.com/blog/2015/03/24/spark-sql-graduates-from-alpha-in-spark-1-3.html

I've tried to apply the example mentioned in the post above, but that didn't work: it gives me an error. I was hoping someone could provide a full example in Scala of how to connect and query the data.
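For what it's worth, a minimal sketch against the Spark 1.3 data-sources API (the connection URL and table name are placeholders, and the JDBC driver jar has to be on the classpath):

    // Load a table through the built-in "jdbc" source into a DataFrame.
    val df = sqlContext.load("jdbc", Map(
      "url"     -> "jdbc:postgresql://host:5432/mydb?user=u&password=p",
      "dbtable" -> "schema.mytable"))

    df.filter("id > 100").show() // query it like any other DataFrame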

by kraster at March 28, 2015 06:50 PM

loops over the registered variable to inspect the results in ansible

I have an Ansible playbook that creates multiple EC2 security groups using with_items and registers the result.

Here is the var file for this playbook:

---
 ec2_security_groups:
   - sg_name: nat_sg
     sg_description: This sg is for nat instance
     sg_rules:
       - proto: tcp
         from_port: 22
         to_port: 22
         cidr_ip: 0.0.0.0/0

   - sg_name: web_sg
     sg_description: This sg is for web instance
     sg_rules:
       - proto: tcp
         from_port: 22
         to_port: 22
         cidr_ip: 0.0.0.0/0
       - proto: tcp
         from_port: 80
         to_port: 80
         cidr_ip: 0.0.0.0/0

And here is the playbook that creates the EC2 security groups:

---


- name: EC2Group | Creating an EC2 Security Group inside the Mentioned VPC
   local_action:
     module: ec2_group
     name: "{{ item.sg_name }}"
     description: "{{ item.sg_description }}"
     region: "{{ vpc_region }}" # Change the AWS region here
     vpc_id: "{{ vpc.vpc_id }}" # vpc is the resgister name, you can also set it manually
     state: present
     rules: "{{ item.sg_rules }}"
   with_items: ec2_security_groups
   register: aws_sg

This works very well, but the problem is that I want to get the group id of each group this playbook has created, for use in the next task. I tried the following, but it failed:

- name: Tag the security group with a name
  local_action:
   module: ec2_tag
   resource: "{{ item.group_id }}" # each loop item is one result from the register
   region: "{{ vpc_region }}"
   state: present
   tags:
     Name: "{{ vpc_name }}-group"
  with_items: aws_sg.results

Can somebody show me how to get the group_id of each group from the registered result? Thanks.

P.S.: I can get the value of the group_id for an individual security group like this:

aws_sg.results[0].group_id and aws_sg.results[1].group_id etc

by arbabnazar at March 28, 2015 06:49 PM

TheoryOverflow

Evaluating the expected value of negatively correlated random variables

A polynomial-time random process satisfying the following properties converts a fractional point $(x_1, x_2, \ldots, x_n) \in \mathcal{P}$, $(x_i \in [0,1])$, to a random integer point $(X_1, X_2, \ldots, X_n) \in \mathbb{Z}(\mathcal{P})$, $(X_i \in \{0,1\})$:

  • $\mathbb{E}[X_i]=x_i$, for all $i \in [n]$.
  • For any $S \subseteq [n]$, $\mathbb{E}[\prod_{i\in S} X_i] \leq \prod_{i \in S} x_i$ and $\mathbb{E}[\prod_{i\in S} (1-X_i)] \leq \prod_{i \in S} (1-x_i)$ (negative correlation).

An example is the dependent randomized rounding by Chekuri, Vondrak, and Zenklusen (http://arxiv.org/pdf/0909.4348v2.pdf).

Now, the question is: given a point $x \in \mathcal{P}$, and assuming that $X$ is the outcome of the random process, can we estimate the value of $\mathbb{E}[\prod_{i\in S} X_i]$ or $\mathbb{E}[\prod_{i\in S} (1-X_i)]$ for some $S \subseteq [n]$, with e.g. a Chernoff bound on the calculated value? Should we use sampling for this purpose?

It would be great if you could comment on this.

by salmAn at March 28, 2015 06:47 PM

CompsciOverflow

How to create DFA from regular expression without using NFA?

The objective is to create a DFA from a regular expression, and using the "regular expression > NFA > DFA" conversion is not an option. How should one go about doing that?

I asked our professor about this, but he told me that we can use intuition, and he kindly refused to provide any explanation. So I wanted to ask you.

"Regular exp>NFA>DFA conversion" is not an option because such a conversion takes a lot of time to convert a rather complex regular expression. For example, for a certain regex "regex>NFA>DFA" takes 1 hour for a human being. I need to convert regex to DFA in less than 30 minutes.

by user4220128 at March 28, 2015 06:46 PM

/r/emacs

Issues with org-babel-load-file

I am trying to use org mode to manage my init.el file. I tried the minimal approach listed here:

http://stackoverflow.com/questions/19336489/initializing-emacs-with-org-babel-debugger-entered-lisp-error-void-function

And I get this:

Debugger entered--Lisp error: (wrong-type-argument stringp nil)
  expand-file-name(nil)
  load-file(nil)
  org-babel-load-file("~/emacs-and-org-init.org")

EDIT:

It works now. I had some incorrect syntax somewhere.

submitted by excitedaboutemacs
[link] [comment]

March 28, 2015 06:43 PM

StackOverflow

Learning Clojure - What should I know about Java and more

I have started learning Clojure recently, my main programming language is Ruby and I have no Java experience whatsoever.

  • Which Java standard classes are a must to know when working with Clojure?

    Obviously Clojure doesn't come with a wrapper for everything and a lot of functionality is provided by Java's libraries.

    There's like a gazillion of them on javadocs - which ones should I explore?

  • Where do I look for and how do I install third party libraries (clojure and java ones)?

    In Ruby I'd visit Rubyforge or Rubytoolbox, git etc. and then just gem install the package I found interesting.

  • Which editor/ide for Clojure would you recommend (with the lowest learning curve)?

    I am trying to use NetBeans with Enclojure, and paradoxically it's my biggest obstacle so far:

    Each generated project has some XML files, folders, library dependencies etc. whose purpose I have no clue about.

    I am doing some labrepl exercises and wanted to try out some of the bundled libraries separately in a new project, but even this simple task I cannot accomplish :/

  • How to distribute clojure programs?

    This is pretty much related with the question above.

  • Are there any clojure community driven blogs with news, code tips etc?

by kartan at March 28, 2015 06:33 PM

QuantOverflow

Garch for covariance matrix?

I have seen plenty of literature about using GARCH to estimate volatility. What about covariance? There are plenty of risk models that depend on the covariance matrix.

I guess we can assume the correlation is constant while volatility changes. But in reality, in highly volatile moments, the correlation between stocks increases.

Or is there a separate model for estimating correlation?

by CodeNoob at March 28, 2015 06:33 PM

StackOverflow

How can I get the var of a multimethod?

I'm trying to use dire to add hooks to multimethods. The author says it might not work. Here is an example with a normal function:

(ns mydire.prehook
  (:require [dire.core :refer [with-pre-hook!]]))

(defn times [a b]
  (* a b))

(with-pre-hook! #'times
  "An optional docstring."
  (fn [a b] (println "Logging something interesting.")))

(times 21 2) ; => "Logging something interesting."

As you can see, with-pre-hook! is passed (var times) (which is the same as #'times).

The problem is that when calling var for a multimethod I'm getting an exception: clojure.lang.PersistentList cannot be cast to clojure.lang.Symbol

Is there a way to make this work?

Below is my code sample:

(defmulti get-url identity)

(defmethod get-url :stackoverflow
  [site]
  "http://stackoverflow.com")

(with-pre-hook! (var (get-method get-url :stackoverflow))
  (fn [x] (println "getting url for stackoverflow.")))

by Francisco Dibar at March 28, 2015 06:20 PM

Intellij code style to align single-line comments

Right now IntelliJ's autoformat changes this:

    val reduceFn = (left: U, right: U) => {
      left ++ right                         // comment 1
              .myFuncA( _._1 )              // comment 2
              .myFuncABC {                  // comment 3
                g => {                      // comment 4
                  g.myFun                   
                  ._2                       
                  .myFunBBB( 0 )( _ + _ )   
                }
              }
    }: U                                    // comment 5

to this:

    val reduceFn = (left: U, right: U) => {
      left ++ right // comment 1
              .myFuncA( _._1 ) // comment 2
              .myFuncABC {
                // comment 3
                g => {
                  // comment 4
                  g.myFun 
                  ._2 
                  .myFunBBB( 0 )( _ + _ ) 
                }
              }
    }: U // comment 5

Is there a way I can tell IntelliJ to produce, or at the very least not clobber, the former style? I don't see comments as an option under Editor > Code Style > Scala.


by P. Myer Nore at March 28, 2015 06:18 PM

Planet FreeBSD

Using the arswitch ethernet switch on FreeBSD

I sat down a few weeks ago to make the AR8327 ethernet switch work, and in doing so I wanted to add per-port and 802.1q VLAN support. It turned out that I didn't know as much as I thought I did about the etherswitch support. So, after a whole bunch of trial and error, I wrapped my head around things. This post is mostly a braindump, so that if I do forget, I have something written down about it - at least until I turn it into a FreeBSD manpage.

There's three modes:
  • default - all ports are in the same VLAN;
  • per-port - each port can be in a VLAN 'group';
  • dot1q - each port can be in multiple VLAN groups, with 802.1q tagging going on.
The per-port VLAN group is for switches that don't have an arbitrary VLAN table - you just assign each port an ID from some low set of values (say, 16), and then the VLAN tag can either be added or not added. I think the RTL8366 switch is like this, but I'd have to check.

The dot1q VLAN is for switches that support multiple VLANs, each can have an arbitrary VLAN ID (0..4095) with optional other VLAN options (like tag-in-tag support.)

The etherswitch configuration side has a few options and they're supported by different hardware:
  • Each port has a port VLAN ID - this is the "native port" for dot1q support. I don't think it has any particular meaning in the per-port VLAN code in arswitch but I could be terribly wrong. I thought it did when I initially did the port, but the documentation is .. lacking.
  • Then there's a set of per-port flags - eg q-in-q, 802.1q tagging, etc.
  • Then there's the vlangroup - each vlangroup has a vlan ID, and then a set of port members. Each port member can be tagged or untagged.
This is where things get odd.

Firstly - the AR934x SoC switch support doesn't include VLANs. I need to add that. I'm not sure which side of the wall this falls.

The switches previous to the AR8327 support per-port and VLAN configuration, but they don't support per-port-per-VLAN tagging. Ie, you can configure 802.1q VLANs, and you can enable tagging on the port - but it tags all packets that aren't the port 'VLAN ID'.

The per-port VLAN ID seems ignored by the arswitch code - it's only used by the dot1q support.

So I think (and it hasn't yet been tested) that on the earlier switches, I can use per-port VLANs with tagging by:
  • Configuring per port vlans - "etherswitch config vlan_mode port"
  • Adding vlangroups as appropriate with membership - tag/untag doesn't matter
  • Set the CPU port up to have tagging - "etherswitch port0 addtag"
When configuring dot1q VLANs, the mode is "config vlan_mode dot1q" and the 802.1q VLAN IDs are used, but the above still holds - the port is tagged or untagged.

But on the AR8327, the VLAN map hardware actually supports enabling/disabling tagging on a per-port-per-VLAN basis. Ie, when the VLAN table is programmed with the port membership, it takes a list of both the ports and whether the ports are tagged/untagged/open/filtered. So, I don't think per-port VLAN tagging works - only dot1q tagging. Maybe I can make it work, but I haven't really sat down for long enough with the documentation to see what combinations are required.
  • Configure the hardware - "etherswitch config vlan_mode dot1q"
  • Add vlangroups as appropriate, set pvid as appropriate
  • For each vlangroup membership, the port can be tagged or untagged - eg to tag the cpu port 0, you'd use '0t' as the port member. That says "port0 is a member, and it's tagged."
I still have a whole lot more to add - the ingress/egress filters aren't configurable, the per-port vlan stuff needs to be made much more sensible and consistent - and the AR934x SoC switch needs to support VLANs. Oh, and much more documentation. But, hey, I can get the thing spitting out VLAN tags, so when it's time to set up my home network with some VLANs, I'll be sure to document what I did and share it with everyone.

by adrian at March 28, 2015 06:18 PM

CompsciOverflow

Why is this sequence not acceptable for this DFA?

Why is the sequence 1100101 not accepted by this DFA? Is it because once it reaches the final state 'q3' it can't go back to 'q1'? [image: DFA diagram]

by Koz at March 28, 2015 06:09 PM

/r/compsci

Fefe

Do you know what the real problem with the Germanwings crash ...

Do you know what the real problem with the Germanwings crash is? Men.
Killing sprees are a male thing. And Lufthansa's pilots are 94 percent male. That should change, says Luise Pusch. 14 of the 16 "students" killed in the Airbus were schoolgirls, and the two "teachers" were female teachers. The victims are predominantly women, the perpetrators are male.

March 28, 2015 06:00 PM

Why is everyone now publishing the name of the ...

Why is everyone now publishing the name of the Germanwings co-pilot? Are there no journalistic ethics standards that forbid that? It adds exactly nothing to the substance of the story and needlessly exposes the family.

March 28, 2015 06:00 PM

The prison break of the year was pulled off in England by an ...

The prison break of the year was pulled off in England by an inmate of Wandsworth Prison. He used a smuggled-in mobile phone to write the prison administration an email that looked as if it came from a court employee and ordered his release on bail. Even the prosecution and the judge were impressed:
Prosecutor Ian Paton said: "A lot of criminal ingenuity harbours in the mind of Mr Moore. The case is one of extraordinary criminal inventiveness, deviousness and creativity, all apparently the developed expertise of this defendant".

The judge, Recorder David Hunt QC, described the behaviour as "ingenious" criminality.

March 28, 2015 06:00 PM

Do you know what the real problem with the Germanwings crash ...

Do you know what the real problem with the Germanwings crash is? Islam. (Caution: behind the quoted "German news website" hides a forum commenter at PI-News.)

March 28, 2015 06:00 PM

/r/emacs

Emacs flymake with LaTeX and LaTeX \include

Flymake supports LaTeX out of the box. However, when composing a document that is to be included into another via \include, there is no preamble in the file being modified, so flymake marks everything as an error.

Is there a solution to this?

submitted by 78666CDC
[link] [1 comment]

March 28, 2015 05:57 PM

CompsciOverflow

Integer linear program for Ising ground state problem

I am trying to model the Ising spin state problem into Integer linear program and find the optimal ground state using lp_solve. (This is just a miniature version of Ising state problem)

$$ \text{maximise: } \sum J_{ij}S_{i}S_{j} $$ $$ -1\leq J_{ij} \leq 1 $$ $$ S_{i},S_{j} \in \{-1,1\} $$ The value of $J_{ij}$ is given. The goal is to find optimal values of $S_{i}$ to maximise the value. For example: $J_{12}=1, J_{13}=-1, J_{23}=-1$. One of the solutions for the maximum energy of 3 is $S_{1}=1, S_{2}=1, S_{3}=-1$.

I am finding it difficult to convert this into integer linear program.

This is my initial approach to the conversion. I tried to take an additional variable $X_{i}$ and convert the program as $$ \text{maximise: } \sum X_{i} $$ $$ \text{if } (S_{i}=-1 \text{ or } S_{j}=-1) \text{ and } J_{ij}=-1 \implies X_{i}=1 $$ $$ \text{if } (S_{i}=-1 \text{ or } S_{j}=-1) \text{ and } J_{ij}=1 \implies X_{i}=-1 $$ $$ \text{if } (S_{i}=-1 \text{ and } S_{j}=-1) \text{ and } J_{ij}=-1 \implies X_{i}=-1 $$ $$ \text{if } (S_{i}=-1 \text{ and } S_{j}=-1) \text{ and } J_{ij}=1 \implies X_{i}=1 $$

I don't know if this approach is correct or not, and I don't know how to convert this to a linear program. Any suggestion or help is greatly appreciated.
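For reference, one standard route (a sketch, not necessarily the only one): map spins to binary variables and linearize the products. Writing $S_i = 2x_i - 1$ with $x_i \in \{0,1\}$, each product becomes affine in $x_i x_j$:

$$ S_i S_j = 4x_i x_j - 2x_i - 2x_j + 1 $$

and the product of two binaries can be replaced by an auxiliary binary $z_{ij}$ with the usual linearization constraints

$$ z_{ij} \le x_i, \qquad z_{ij} \le x_j, \qquad z_{ij} \ge x_i + x_j - 1, \qquad z_{ij} \ge 0, $$

so the objective $\sum J_{ij} S_i S_j$ becomes the linear expression $\sum J_{ij}(4z_{ij} - 2x_i - 2x_j + 1)$ over binary variables, which an ILP solver like lp_solve can handle directly.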

by suhastheju at March 28, 2015 05:52 PM

QuantOverflow

What is the gross accounting relation of Cobb-Douglas function?

We have a Cobb-Douglas function like this: $Y=AK^\alpha L^{1-\alpha}$. In one of the books it is deduced like this: [image: derivation from the book]

How can we get this formula? $$\frac{\Delta Y}{Y} = \frac{\Delta A}{A}+\alpha\frac{\Delta K}{K}+(1-\alpha)\frac{\Delta L}{L}$$
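For reference, a sketch of the standard derivation (growth accounting by total differentiation, using $\Delta \ln z \approx \Delta z/z$ for small changes): take logs of the production function,

$$ \ln Y = \ln A + \alpha \ln K + (1-\alpha)\ln L, $$

then take differences,

$$ \Delta \ln Y = \Delta \ln A + \alpha\,\Delta \ln K + (1-\alpha)\,\Delta \ln L, $$

and approximate each $\Delta \ln z$ by $\Delta z/z$ to obtain

$$ \frac{\Delta Y}{Y} \approx \frac{\Delta A}{A}+\alpha\frac{\Delta K}{K}+(1-\alpha)\frac{\Delta L}{L}. $$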

by ZHI at March 28, 2015 05:45 PM

CompsciOverflow

Help with understanding Simulated Annealing algorithm

I'm trying to wrap my head around it, but no matter what I read, I still can't fully understand it.
I tried to read a little bit about the annealing process in physics, but I have no background whatsoever in physics, let alone in thermodynamics, so I couldn't understand what it is exactly or how it fits into the algorithm.
Here is the algorithm:

[image: Simulated-Annealing pseudocode]

In the Hill Climbing algorithm, the reasoning can be easily described: Of all the successors of the current state - choose the highest-valued. But in Simulated Annealing... well, I can see what the algorithm does, I just don't understand the reasoning behind it:
1. It starts a timer.
2. It chooses a random successor.
3. It evaluates how "far away" the randomly chosen successor from current.
4. If the successor is indeed "progress in the right direction" ($\Delta E > 0$), then we move towards the successor; otherwise, we still move towards the successor, but only with some (odd) probability that depends on the timer(?).
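A minimal sketch of that acceptance rule in Scala (assuming hypothetical helpers: an energy function, a randomSuccessor function, and a cooling schedule mapping the timer to a temperature; none of these names come from the pseudocode above):

import scala.util.Random

def simulatedAnnealing[S](start: S,
                          energy: S => Double,
                          randomSuccessor: S => S,
                          schedule: Int => Double): S = {
  var current = start
  var t = 1
  while (schedule(t) > 1e-9) {                // the "timer": temperature shrinks as t grows
    val next = randomSuccessor(current)
    val dE = energy(next) - energy(current)   // > 0 means next is an improvement (maximizing)
    // improvements are always accepted; bad moves only with probability e^(dE/T),
    // which shrinks both with how bad the move is and with falling temperature
    if (dE > 0 || Random.nextDouble() < math.exp(dE / schedule(t)))
      current = next
    t += 1
  }
  current
}

Early on (high temperature) bad moves are accepted often, which lets the search escape local maxima that trap Hill Climbing; as the temperature falls, it behaves more and more like Hill Climbing.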

Why is randomly choosing a successor better than the Hill Climbing method?
Can someone please explain the reasoning behind it?
Do I really have to understand the annealing process? If so, can someone please explain it in layman terms?

by so.very.tired at March 28, 2015 05:17 PM

QuantOverflow

the hedging behind the decomposition of american put options

Now I'm reading the paper "Alternative Characterizations of American Put Options" by Carr, Jarrow, and Myneni:

http://www.math.nyu.edu/research/carrp/papers/pdf/amerput7.pdf

After Theorem 1 (on page 4), the authors say:

[image: quoted paragraph from the paper]

I don't quite understand why the "investor" should hedge the put option when the stock price is below the boundary. I think only the "writer" of the option should hedge, not the "investor". What is the meaning of this paragraph?

by paradox at March 28, 2015 05:13 PM

CompsciOverflow

A logic function that is true iff the first operand is less than the second operand

In my computer organization class I have been given a series of problems. One I'm stuck on currently is below:

Assume that $X$ consists of 4 bits, $x_3 x_2 x_1 x_0$, and $Y$ consists of 4 bits, $y_3 y_2 y_1 y_0$. Write logic functions that are true if and only if

(a) $X < Y$, where $X$ and $Y$ are thought of as unsigned binary numbers.

(b) $X < Y$, where $X$ and $Y$ are thought of as signed (two’s complement) numbers.

(c) $X = Y$.

(d) Use a hierarchical approach that can be extended to larger numbers of bits. Show how can you extend it to 8-bit comparison (that is, if $X$ and $Y$ are 8-bit numbers, how to implement the above three comparisons).

For all of them I understand what makes each case true. I'm even aware of the simple method of writing a truth table and listing out the logic functions that make each case true; however, that approach would produce a table 256 rows tall.

I'm stuck a bit on how to write out the logic. The real confusion actually comes from the TA in the class. He gave an example using 3-bit numbers. I believe he was using the case of $X < Y$ still. His solution was: $(x_2 \;\mathrm{XOR}\; y_2)' \cdot (x_1 \;\mathrm{XOR}\; y_1)' \cdot (x_0' \cdot y_0)$

Is this the full answer for the case of 3-bit numbers for part (a)? I understand this solution, but I feel there is more. For example, $(x_2 \;\mathrm{XOR}\; y_2)'$ checks whether the most significant bits are equal, which gives reason to move on to the next portion of the function; but if $x_2$ is 0 while $y_2$ is 1, then $x$ is smaller and the function needs to become true, and I don't see how that is a possible outcome with that example solution.
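That suspicion is right: the quoted expression only covers the case where the higher bits agree. For reference, one common way to write the full 3-bit unsigned comparison (a sketch; it ORs together the "strictly smaller at bit $i$, equal above" cases) is

$$X<Y \;=\; x_2'\,y_2 \;+\; (x_2 \;\mathrm{XOR}\; y_2)'\,x_1'\,y_1 \;+\; (x_2 \;\mathrm{XOR}\; y_2)'\,(x_1 \;\mathrm{XOR}\; y_1)'\,x_0'\,y_0,$$

where the first term is exactly the missing "$x_2$ is 0 while $y_2$ is 1" outcome.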

by entimaniac at March 28, 2015 05:10 PM

StackOverflow

To start REPL in user defined namespace

(in-ns 'dbx): writing this code to some file and loading it isn't changing the default namespace of the cygwin/console REPL. It's still user=>, not dbx=>. How can we start a REPL in a namespace defined in some script file? How can this be achieved?

by vikbehal at March 28, 2015 04:57 PM

Passing json result to view in Play/Scala

Model -

case class Renting(name: String, pets: Int)
case class Resident(renting: List[Renting])
case class Location(residents: List[Resident])

View -

@(jsonResults: List[Renting])

@jsonResults.map { json =>
  Name: @json.name
  Pets: @json.pets
}

Controller -

val json: JsValue = Json.obj(
  "location" -> Json.obj(
    "residents" -> Json.arr(
      Json.obj(
        "renting" -> Json.arr(
          Json.obj(
            "name" -> "John Doe",
            "pets" -> 2
          ),
          Json.obj(
            "name" -> "Jane Smith",
            "pets" -> 1
          )
        )
      )
    )
  )
)

implicit val rentingFormat = Json.format[Renting]
implicit val residentFormat = Json.format[Resident]
implicit val locationFormat = Json.format[Location]

(json \ "location").validate[Location] match {
  case s: JsSuccess[Location] => {
    val location: Location = s.get
    /* Something happens here that converts Location to List[Renting] */
    Ok(views.html.index(location))
  }
  case e: JsError => Ok(JsError.toFlatJson(e))
}

Based on the s.get.toString output, it seems that the json is being properly traversed; however, I need to change the type from Location to List[Renting] so that I can pass the result into the view. Any help would be greatly appreciated!
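For reference, given the case classes above, the missing step is a flatMap over the nested lists (a sketch to slot in at the commented line):

// each Resident carries a List[Renting]; flatten them all into one list
val rentings: List[Renting] = location.residents.flatMap(_.renting)
Ok(views.html.index(rentings))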

by Andy at March 28, 2015 04:56 PM

How can I add a PPA repository using Ansible?

I'm trying to add a new repository to a server so that I can install Java by Ansible. Unfortunately whenever I try to run the playbook it fails because of a GPG error. Can somebody explain what is going wrong here and what I need to do in order to fix this?

I'm using Ansible 1.7.2 and currently only connecting to localhost.

I have a very simple Playbook that looks like this:

- hosts: home
  tasks:
   - name: Add repositories
     apt_repository: repo='ppa:webupd8team/java' state=present

When I try to execute it, I get the following error:

sal@bobnit:~/Workspace$ ansible-playbook --ask-sudo-pass basic.yml 
sudo password: 

PLAY [home] ******************************************************************* 

GATHERING FACTS *************************************************************** 
ok: [localhost]

TASK: [Add repositories] ****************************************************** 
failed: [localhost] => {"cmd": "apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 7B2C3B0889BF5709A105D03AC2518248EEA14886", "failed": true, "rc": 2}
stderr: gpg: requesting key EEA14886 from hkp server keyserver.ubuntu.com
gpg: no writable keyring found: eof
gpg: error reading `[stream]': general error
gpg: Total number processed: 0

stdout: Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --homedir /tmp/tmp.HKDOSZnVQP --no-auto-check-trustdb --trust-model always --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyring /etc/apt/trusted.gpg.d/steam.gpg --keyring /etc/apt/trusted.gpg.d/ubuntu-x-swat_ubuntu_x-updates.gpg --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 7B2C3B0889BF5709A105D03AC2518248EEA14886

msg: gpg: requesting key EEA14886 from hkp server keyserver.ubuntu.com
gpg: no writable keyring found: eof
gpg: error reading `[stream]': general error
gpg: Total number processed: 0

FATAL: all hosts have already failed -- aborting

PLAY RECAP ******************************************************************** 
           to retry, use: --limit @/home/sal/basic.retry

localhost                  : ok=1    changed=0    unreachable=0    failed=1   

by Salim Fadhley at March 28, 2015 04:45 PM

How to test if two numbers are close in Clojure

What is the idiomatic way to test if two numbers are close to each other in Clojure?

Somewhere along the line of:

(deftest sqrt-test
   (is (~= 1.414 (Math/sqrt 2))))

by Chris at March 28, 2015 04:36 PM

QuantOverflow

Implied Vol vs. Calibrated Vol

Consider the Black-Scholes model, in which the log stock return over a time period $\Delta t$ is given by

$$ \log(S_{i+1}/S_i) = (\mu - \sigma^2/2)\Delta t + \sigma \sqrt{\Delta t} Z_i, \qquad Z_i \sim \mathcal{N}(0,1). $$

The price of a call at time $T$ under this model (when we replace $\mu$ with $r$) is given by (emphasizing the dependence on $\sigma$)

$$ C(\sigma) = SN(d_1) - Ke^{-rT}N(d_2), $$

where

$$ d_1 = \frac{1}{\sigma{\sqrt{T}}}\left(\log(S/K) + (r + \sigma^2/2)T\right) = d_2 + \sigma \sqrt{T}. $$

Now, assuming $r$ is known, we have (at least) two methods of estimating $\sigma$, namely using a least-squares regression on the log returns, or calculating the implied vol.

Regression on log returns:

Note the log returns is a linear regression equation of the form

$$ Y_i = \beta_0 + \beta_1X_i + \sigma\sqrt{\Delta t} \epsilon_i $$

with $\beta_0 = (\mu - \sigma^2/2)\Delta t$, $\beta_1 = 0$ and $\epsilon_i \sim \mathcal{N}(0,1)$, independent. So, assuming we have a sample of $N$ log returns (denoted $Y_i$) and since $\beta_1 = 0$, we estimate $\beta_0$ in the usual regression way by

$$ \hat{\beta_0} = \frac{1}{N}\sum_{i=1}^N Y_i, $$ and then estimate $\sigma$ using the standard deviation of the residuals,

$$ \hat{\sigma} = \frac{std(Y_i - \hat{Y_i})}{\sqrt{\Delta t}}, $$

where the $\hat{Y_i}$ are the regression model-predicted log returns. This is one method to estimate $\sigma$ used in the pricing equation, and is in the least-squares sense our "best guess" at $\sigma$. This $\hat{\sigma}$ could then be used to compute all European call options for $S$ across all strikes and expirations.

Implied vol:

Given a market call price $C_{\text{observed}}$ for some strike and expiration, we can compute the $\sigma_{\text{implied}}$ such that $C(\sigma_{\text{implied}}) = C_{\text{observed}}$. We can compute such a $\sigma_{\text{implied}}$ for all call options we have prices for (again assuming $r$ is known). Then, when we would like to price a call using our pricing equation for some strike/expiration that is not observed, we can choose (or interpolate between a few) the $\sigma_{\text{implied}}$ that is closest to the strike/expiration we would like to price at and use this $\sigma_{\text{implied}}$ in our pricing equation.

So, we have two methods of deriving a suitable $\sigma$ to use in our pricing equation. It seems that much of the literature is devoted to implied vol, so I assume this is the preferred technique. My question is, is there any relation between the two, and when would you use one over the other?

by bcf at March 28, 2015 04:31 PM

/r/emacs

Is there a way to kill all generated buffers?

Obviously they are needed for a time, but I get a lot of helm buffers that are left over.

submitted by excitedaboutemacs
[link] [6 comments]

March 28, 2015 04:31 PM

QuantOverflow

How to estimate CVA by valuing a CDS of the counterparty?

I'm trying to estimate CVA of one of my derivatives by valuing a credit default swap (CDS) of my counterparty. However, I don't know how to set up the CDS deal (notional amount, maturity, etc.).

Thanks!

by Carlos F. at March 28, 2015 04:22 PM

/r/netsec

CompsciOverflow

How to prove that these two languages are regular, or not regular?

I have these two languages

$L_1=\{a^n b^m : n≥m+5,\ m>0\}$ where $\Sigma=\{a,b\}$

$L_2=\{a^n b^m : n≥m+5,\ m≤5\}$ where $\Sigma=\{a,b\}$

As you can see, there is only one difference: the condition on $m$.

My question is to determine whether these two languages are regular or not, and to prove it.

If one or both of these languages are non-regular, how can we prove that using the pumping lemma?

These two languages have so many conditions that I am confused about how to approach languages of this kind.
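A sketch of the key observation (assuming $m \ge 0$ in $L_2$; start the union at $m=1$ if $m>0$ is intended): bounding $m$ makes the language a finite union of regular pieces,

$$L_2=\bigcup_{m=0}^{5} \{a^{m+5}\}a^{*}\{b^{m}\},$$

which is regular, whereas for $L_1$ the pumping lemma applies: with pumping length $p$, take $w=a^{p+5}b^{p}\in L_1$; then $y$ consists only of $a$'s, and pumping down to $xy^{0}z=a^{p+5-|y|}b^{p}$ breaks the condition $n\ge m+5$.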

by CSstudent at March 28, 2015 04:10 PM

Lobsters

CompsciOverflow

How to create an LR($k$) grammar for a given language?

How can I construct an LR($k$) grammar (which $k$ is not really of interest for now) for a given language, say:

$$L=\left\{w \in \{a, b\}^\star \ | \ \exists u \in \{a, b\}, v \in \{a, b\}^\star: w = vu \wedge |w|_u \operatorname{mod} 2 = 0\right\}$$

($|w|_u$ denotes the number of $u$'s in $w$).

Please note that I don't want a right or left linear grammar although I KNOW that the language is regular. I am interested in the process of creating an LR($k$) grammar for an arbitrary, in general more complex language. (Feel free to ignore this example.)

by lukas.coenig at March 28, 2015 03:33 PM

Is this grammar LR(1)?

I am a bit confused about whether this grammar is ambiguous or not:

C' -> C
C -> d C u C
C -> d C
C -> ε

I tried building the DFA for this but I get this in one of the states:

C -> d C DOT u C, $
    C -> d C DOT, $

Isn't this a shift-reduce conflict, so surely it means the grammar is not LR(1)? Or does it reduce regardless, since $ and u are both in the follow set of C? And is this not a reduce-reduce conflict, since it goes to C regardless?

by Mishaal Hasan at March 28, 2015 03:32 PM

Can we build a nondeterministic decider PDA using two PDAs accepting a language and its complement?

When talking about Turing machines, it can easily be shown that, starting from two machines accepting $L$ and its complement $L^c$, one can build a machine which fully decides whether a word is in $L$ or not.

But what about PDAs? Starting from two different PDAs, one accepting $L$ and one accepting $L^c$, can we build another PDA which accepts $L$, and only crashes or halts in non-final states (rejects) when $w\notin L$?

by Ali.S at March 28, 2015 03:31 PM

QuantOverflow

Are CME security id's unique and constant over time?

For any given day, CME security IDs are unique - a number will always refer to a single product.

Are they unique over time as well? That is, might a new security have a security id that used to be used by an expired one?

And are they constant? That is, does a given security keep the same security id over its lifetime?

by Cookie at March 28, 2015 03:22 PM

Dave Winer

Who drives you?

Silicon Valley proposes to drive our cars for us. I guess that's okay.

Silicon Valley theorizes about putting our minds in software containers and storing them in computer memory, to be woken up when there's something for us to do, or think about, or perhaps see.

But what would there be to see or do or think about in a world where everyone else is in a computer's memory. Have you seen many beautiful megabytes recently? Honey come look at the glorious pixel I just found. Of course you'll only be able to "see" it from the other side of the glass, looking out.

Maybe we'll be able to hear each other's thoughts?

There's a watch coming. But it will be a short-lived product, because soon we won't have wrists to put them on.

So in a world where we're woken up to see or think, or do something, but there's nothing to see or do, or think about, I guess there won't be any reason to wake up.

When it's all said and done, we might just end our species because we ran out of things to think about or do or see. Something to think about. Or do something about.

PS: I think this whole upload-your-brain thing is a trick. Once they get you up there, they'll just turn the computer off and that's that.

March 28, 2015 03:19 PM

StackOverflow

Scala Find Method Syntax

val underlying: MongoDBCollection
def find(doc: DBObject): DBCursor = underlying find doc

Here's a hypothetical program. This is apparently a valid implementation of the find method, but I don't understand how the method body underlying find doc produces a value. How does the find method evaluate the doc parameter, and how does the underlying variable affect anything? Why not find doc underlying, or just find doc, as the method body?
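For reference, underlying find doc is just Scala's infix notation for an ordinary method call; a minimal illustration:

val xs = List(1, 2, 3)
xs.contains(2) // ordinary dot-and-parentheses call
xs contains 2  // the same call in infix notation: receiver method argument

So underlying find doc means underlying.find(doc): it calls find on the wrapped MongoDBCollection with doc as the argument, and that DBCursor is the value of the method body.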

by RoyalCanadianKiltedYaksman at March 28, 2015 03:14 PM

Invoking curried function

Below is an implementation of a curried function:

scala> def multiply(x: Int, y: Int) = x * y
multiply: (x: Int, y: Int)Int

scala> def multiplyCurried = (multiply _).curried
multiplyCurried: Int => (Int => Int)

When I attempt to invoke multiplyCurried I receive this error:

<console>:10: error: missing parameter type
              multiplyCurried(a => b => a * b)

What is the correct way to invoke multiplyCurried?
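For reference (a sketch): multiplyCurried already is the curried function, of type Int => Int => Int, so it is invoked one argument list at a time rather than by passing it a lambda:

val result = multiplyCurried(3)(4) // 12
val triple = multiplyCurried(3)    // Int => Int, partially applied
triple(5)                          // 15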

by blue-sky at March 28, 2015 03:09 PM

CompsciOverflow

Showing that $\{ c^n a^m b^{n+m} : n+m \geq 6\}$ is not regular

I'm trying to show that $L_6=\{c^n a^m b^p : n+m=p,\ p \geq 6\}$ is not regular. I need a little help: I was practicing the pumping lemma when I encountered this language, and its conditions got me totally confused. What should I do now?

Earlier I showed that $L_5=\{a^n b^n : n≥0\}$ is not regular. For that language it was very simple to choose $w$, namely $w= a^pb^p$, where $p$ is the pumping length. But this new language is more complicated, so I thought you could help me out.
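One way in (a sketch; the conditions collapse once you pick $m=0$): let $p$ be the pumping length and set $q=\max(p,6)$. Take

$$w = c^{q}b^{q} \in L_6 \qquad (n=q,\ m=0,\ n+m=q\ge 6).$$

Since $|xy|\le p \le q$, the pumped block $y$ consists only of $c$'s, so $xy^2z=c^{q+|y|}b^{q}$ has $n+m=q+|y|\neq q$, contradicting membership. Hence $L_6$ is not regular.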

by CSstudent at March 28, 2015 03:02 PM

DataTau

StackOverflow

How to prevent java.lang.OutOfMemoryError: PermGen space at Scala compilation?

I have noticed a strange behavior of my scala compiler. It occasionally throws an OutOfMemoryError when compiling a class. Here's the error message:

[info] Compiling 1 Scala source to /Users/gruetter/Workspaces/scala/helloscala/target/scala-2.9.0/test-classes...
java.lang.OutOfMemoryError: PermGen space
Error during sbt execution: java.lang.OutOfMemoryError: PermGen space

It only happens once in a while and the error is usually not thrown on the subsequent compile run. I use Scala 2.9.0 and compile via SBT.

Does anybody have a clue as to what might be the cause for this error? Thanks in advance for your insights.

by BumbleGee at March 28, 2015 02:56 PM

Returning the same value type, but with success/failure

Let's say I have a method..

def foo(b: Bar): Try[Bar] = ???

Try is just a placeholder here. foo does something with Bar, then returns a value to indicate success/failure. I want to return the original value with the success/failure indication, so when I have a collection, I can know which ones failed and succeeded, and do something with them. Try doesn't really work for me, because Failure wraps an exception (let's say I don't care about the reason why it failed).

I could maybe return Either[Bar, Bar], but it seems redundant to repeat the type parameter.

Are there better alternatives than this?
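One alternative (a sketch, with Bar as in the question and a hypothetical input collection bars): a small ADT that tags the value with success or failure and nothing else, which makes partitioning a collection direct:

sealed trait Outcome
final case class Succeeded(b: Bar) extends Outcome
final case class Failed(b: Bar) extends Outcome

def foo(b: Bar): Outcome = ???   // same shape as before, but no exception wrapped

// over a collection, split failures from successes in one pass
val (failures, successes) = bars.map(foo).partition {
  case Failed(_) => true
  case _         => false
}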

by ginov at March 28, 2015 02:56 PM

Parsing string for retrieving date in UNIX and freeBSD

I have a string in the following format: YYYY-MM-DD

How can I convert it to a date on UNIX and FreeBSD? I know it is

date -d $VAR +'%Y-%m-%d' on GNU

and

date -jf $VAR +'%Y-%m-%d' on FreeBSD

But what if my script (in sh, not in bash) has to work on both OSes? How can I combine them?

by Mirator at March 28, 2015 02:51 PM

routing configuration for akka cluster with remote nodes

I have several remote nodes that run on different computers and are connected in a cluster. There is a logging system on one of the nodes, with the role 'logging', that writes logs to a DB. I chose to use routing for delivering messages to the logger from the other nodes. I have one node with a main actor and three child actors. Each of them must send logs to the logging node. My configuration for the router:

akka.actor.deployment {
  /main/loggingRouter = {
    router = adaptive-group
    nr-of-instances = 100
    cluster {
      enabled = on
      routees-path = "/user/loggingEvent"
      use-role = logging
      allow-local-routees = on
    }
  }
  "/main/*/loggingRouter" = {
    router = adaptive-group
    nr-of-instances = 100
    cluster {
      enabled = on
      routees-path = "/user/loggingEvent"
      use-role = logging
      allow-local-routees = on
    }
  }
}

And I create router in each actor with this code

val logging = context.actorOf(FromConfig.props(), name = "loggingRouter")

And send

logging ! LogProtocol("msg")

After that, the logger receives messages from only one child actor. I don't know how to debug it, but my guess is that I am applying the wrong pattern here.

What is the best practice for this task? Thanks.

Actor from logger node:

system.actorOf(Logging.props(), name = "loggingEvent")

by diemust at March 28, 2015 02:27 PM

TheoryOverflow

Confusion Regarding the Formal Definition of P/Poly and SAT-Languages [on hold]

My understanding of circuits and circuit families of size $F(n)$ is as follows:

  1. A circuit family of size $F(n)$ is a set of circuits; the $i$th circuit has $i$ input lines and 1 output line, with maximum circuit size $F(n)$.
  2. Given a binary input string of length $N$, the circuit family accepts the string iff the $N$th circuit from the family outputs 1 on the given input.
  3. The circuit family is in P/poly iff $F(n) \le N^k$ for some constant $k$.

Assume a given circuit family of size $F(n)$. Source of confusion [SAT vs P/poly]:

Q1. Let us assume a random 3-SAT instance on $N$-bit inputs. What does it mean when we say the 3-SAT instance is accepted by the given circuit family? Does it mean that the input to the circuit family is the description of the 3-SAT instance, and the output is 1 if the 3-SAT instance is satisfiable on any one of all possible inputs, and 0 otherwise?

Q2. Assuming we have a circuit for $N$-bit SAT instances, will that exact same circuit accept all of the exponentially many 3-SAT instances on $N$-bit inputs that are possible?

Q3. What does it mean for the circuit family to be uniform?

by TheoryQuest1 at March 28, 2015 02:23 PM

QuantOverflow

Equity Chart - design and granularity

I am looking to build a web based Equity chart to display performance of FX trading strategies.

I would like to hear opinions and advice on a few areas that I am unsure about.

Granularity

Equity can - and typically does - change every tick. Should I therefore save equity every tick? If I do I am likely to be saving a lot of data! And then the display of this data will also be a challenge - as there would be a lot of noise.

If I save a snapshot at fixed moments in time, what would be a recommended timeframe? Every minute?

Optimizing for download

As the amount of data in the equity chart could be quite large, what are some recommended approaches to optimize for download? Would it be advisable to somehow smooth the equity curve and download just a vector line, rather than downloading a csv/json with many thousands of datapoints?

Thanks for any feedback - its really appreciated.

by Magick at March 28, 2015 02:22 PM

StackOverflow

Good example of implicit parameter in Scala?

So far implicit parameters in Scala do not look good to me - they are too close to global variables; however, since Scala seems like a rather strict language, I am starting to doubt my own opinion :-).

Question: could you show a real-life (or close) good example where implicit parameters really work? IOW: something more serious than showPrompt, that would justify such a language design.

Or, on the contrary, could you show a solid language design (it can be imaginary) that would make implicits unnecessary? I think that even no mechanism is better than implicits, because code is clearer and there is no guessing.

Please note, I am asking about parameters, not implicit functions (conversions)!

Updates

Global variables

Thank you for all the great answers. Maybe I should clarify my "global variables" objection. Consider such a function:

max(x : Int,y : Int) : Int

you call it

max(5,6);

you could (!) do it like this:

max(x:5,y:6);

but in my eyes implicits works like this:

x = 5;
y = 6;
max()

it is not very different from a construct like this (PHP-like):

max() : Int
{
  global x : Int;
  global y : Int;
  ...
}

Derek's answer

This is a great example; however, if you can think of an equally flexible way of sending the message without using implicits, please post a counter-example. I am really curious about purity in language design ;-).
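A widely cited real-life case (a sketch, not from the question): Scala's futures thread an ExecutionContext implicitly, so every asynchronous call does not have to repeat the same plumbing parameter, yet a caller can still supply one explicitly:

import scala.concurrent.{ExecutionContext, Future}

// the implicit parameter threads the "which thread pool" decision through
// every async call without repeating it at each call site
def fetch(url: String)(implicit ec: ExecutionContext): Future[String] =
  Future(s"contents of $url")

import scala.concurrent.ExecutionContext.Implicits.global
val a = fetch("http://example.org")                          // context supplied implicitly
val b = fetch("http://example.org")(ExecutionContext.global) // or passed explicitly

Unlike a global variable, the context is lexically scoped and statically checked: leaving it out where none is in scope is a compile error, not a silent read of mutable state.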

by greenoldman at March 28, 2015 02:03 PM

QuantOverflow

How to forecast bond price with time series

My goal is to develop a model that can forecast the future prices of European government bonds (or other private bonds), particularly from the historical prices and returns of the bonds. However, I do not really know how to start. In fact, I have just graduated from a quantitative finance course, but I have never gone deep enough into the subject to develop such a model. Does someone have advice on how to start? Maybe simply pointing to some papers dealing with the subject. I would prefer something not purely theoretical, but that explains step by step how to build the model. I conclude by saying that I know R and C well enough, so I might even consider a model based on Monte Carlo simulation, for example.

by Giacomo Rosaspina at March 28, 2015 01:55 PM

DragonFly BSD Digest

In Other BSDs for 2015/03/28

It’s been a quiet week in BSD-land, at least in terms of me finding links.

by Justin Sherrill at March 28, 2015 01:48 PM

StackOverflow

How to push data to enumerator

The code below is a simple example of an actor that would like to communicate with a remote server using an enumerator. It should be possible to push new data to the enumerator. However, I'm not sure how to do so. I found a solution in this question, but Enumerator.imperative is deprecated, and according to the Play! docs it seems that it was replaced by Concurrent.unicast, which doesn't have the push method.

// WorkerActor
val stdin = Concurrent.unicast[Array[Byte]](onStart = channel => {
  channel.push("uname -a\n".toCharArray.map(_.toByte)) // First message
}) >>> Enumerator.eof

attachStream(stdin)

def receive = LoggingReceive {
  case message: Array[Byte] =>
    // TODO: push the message to the stream
    // stdin push message ?
    ...
}

Thank you for any help you can provide.
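One approach that might work (a sketch, under the assumption that the actor processes messages one at a time, so the mutable field is safe here): capture the Channel that Concurrent.unicast hands to onStart, and push through it from receive:

// sketch: remember the channel handed to onStart, push through it in receive
var channel: Option[Concurrent.Channel[Array[Byte]]] = None

val stdin = Concurrent.unicast[Array[Byte]](onStart = ch => {
  channel = Some(ch)
  ch.push("uname -a\n".toCharArray.map(_.toByte)) // first message, as before
}) >>> Enumerator.eof

attachStream(stdin)

def receive = LoggingReceive {
  case message: Array[Byte] =>
    channel.foreach(_.push(message)) // push the message to the stream
}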

by Grant at March 28, 2015 01:43 PM

File lookup() relative to playbook

I am currently using lookup() function in role tasks.yml to get input from files for shell commands.

Is there a way to make lookups relative to the playbook file (project root folder) instead of the role itself? I'd rather store files at the playbook level.

by Mikko Ohtamaa at March 28, 2015 01:29 PM

How can I get the functionality of the Scala REPL :type command in my Scala program

In the REPL there's a command to print the type:

scala> val s = "House"
scala> import scala.reflect.runtime.universe._
scala> val li = typeOf[List[Int]]
scala> :type s
String
scala> :type li
reflect.runtime.universe.Type

How can I get this ":type expr" functionality in my Scala program to print types?

Let me clarify the ":type expr" functionality I would like to have, something like this:

println(s.colonType)   // String
println(li.colonType)  // reflect.runtime.universe.Type

How can I get such a "colonType" method in my Scala program outside the REPL (where I don't have the :type command available)?
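One way to approximate this (a sketch; colonType is a made-up name, and this reports static types only, which is also what the REPL's :type shows): capture the type with a TypeTag:

import scala.reflect.runtime.universe._

// static types only: the tag is resolved at the call site by the compiler
def colonType[T: TypeTag](value: T): String = typeOf[T].toString

println(colonType("House"))     // String
println(colonType(List(1, 2)))  // List[Int]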

by user1525911 at March 28, 2015 01:26 PM

/r/netsec

StackOverflow

scalaz-stream tcp `echo` app not work

I wrote an echo app that sends and receives '\0'-terminated strings:

https://gist.github.com/jilen/10a664cd588af10b7d09

object Foo {

  implicit val S = scalaz.concurrent.Strategy.DefaultStrategy
  implicit val AG = tcp.DefaultAsynchronousChannelGroup
  ...

  def runServer() {
    def writeStr(str: String) = tcp.write(ByteVector(str.getBytes))
    val echoServer = (readStr |> serverLog("[Server] Receiving")).flatMap(writeStr)
    val server = tcp.server(addr, concurrentRequests = 1)(echoServer.repeat)
    server.flatMap(_.drain).run.run
  }

  def runClient() {
    val topic = async.topic[String]()
    val echoClient = (topic.subscribe |> clientLog("[Client] Inputing")).map { str =>
      tcp.write(ByteVector(str.getBytes) ++ Delimiter) ++ (readStr |> clientLog("[Client] Receiving"))
    }
    val client = tcp.connect(addr)(tcp.lift(echoClient))
    client.run.runAsync(println)
    io.stdInLines.to(topic.publish).run.run
  }
}

I run Foo.runServer() and Foo.runClient() in different terminals,

and enter the numbers 1 2 3 ... from the client console, but the client receives no reply.

What's wrong with my echo app?

by jilen at March 28, 2015 01:18 PM

Calling showdown using Rhino

I want to use the showdownjs JavaScript markdown library from Rhino.

It is used like this:

var Showdown = require('showdown');
var converter = new Showdown.converter();

converter.makeHtml('#hello markdown!');

// <h1 id="hellomarkdown">hello, markdown</h1>

So I have the showdown.js file (https://raw.githubusercontent.com/showdownjs/showdown/master/compressed/Showdown.js); how would I go about calling this makeHtml method, while passing it a parameter that I have on the JVM side?

I found this code snippet online:

import org.mozilla.javascript.Scriptable
import org.mozilla.javascript.ScriptableObject
import org.mozilla.javascript.{Context, Function}
import java.io.InputStreamReader

class Showdown {

  def markdown(text: String): String = {
    // Initialize the Javascript environment
    val ctx = Context.enter
    try {
      val scope = ctx.initStandardObjects
      // Open the Showdown script and evaluate it in the Javascript
      // context.
      val showdownURL = getClass.getClassLoader.getResource("showdown.js")
      val stream = new InputStreamReader(showdownURL.openStream)
      ctx.evaluateReader(scope, stream, "showdown", 1, null)
      // Instantiate a new Showdown converter.
      val converterCtor = ctx.evaluateString(scope, "Showdown converter", "converter", 1, null).asInstanceOf[Function]
      val converter = converterCtor.construct(ctx, scope, null)
      // Get the function to call.
      val makeHTML = converter.get("makeHtml", converter).asInstanceOf[Function]

      val htmlBody = makeHTML.call(ctx, scope, converter, Array[AnyRef](text))

      htmlBody.toString
    }

    finally {
      Context.exit
    }
  }

}

When I use it like this:

val s = new Showdown()
s.markdown("hello")

I get an error:

org.mozilla.javascript.EvaluatorException: missing ; before statement (converter#1)
    at org.mozilla.javascript.DefaultErrorReporter.runtimeError(DefaultErrorReporter.java:109)
    at org.mozilla.javascript.DefaultErrorReporter.error(DefaultErrorReporter.java:96)
    at org.mozilla.javascript.Parser.addError(Parser.java:146)
    at org.mozilla.javascript.Parser.reportError(Parser.java:160)
    at org.mozilla.javascript.Parser.statementHelper(Parser.java:1266)
    at org.mozilla.javascript.Parser.statement(Parser.java:707)
    at org.mozilla.javascript.Parser.parse(Parser.java:401)
    at org.mozilla.javascript.Parser.parse(Parser.java:338)
    at org.mozilla.javascript.Context.compileImpl(Context.java:2368)
    at org.mozilla.javascript.Context.compileString(Context.java:1359)
    at org.mozilla.javascript.Context.compileString(Context.java:1348)
    at org.mozilla.javascript.Context.evaluateString(Context.java:1101)

I've never used Rhino before so I am not sure what the issue is.

Does this line in my method look correct?

val converterCtor = ctx.evaluateString(scope, "Showdown converter", "converter", 1, null).asInstanceOf[Function]
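Possibly relevant here (an assumption on my part, not verified against this setup): the second argument of evaluateString is parsed as JavaScript source, and "Showdown converter" is not a valid JavaScript expression, which would produce exactly a "missing ; before statement" parse error. A sketch of evaluating a valid expression instead, assuming the loaded script defines a global Showdown object with a converter constructor:

// hypothetical fix: evaluate a JS expression that yields the constructor function
val converterCtor = ctx.evaluateString(scope, "Showdown.converter", "converter", 1, null)
  .asInstanceOf[Function]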

by Blankman at March 28, 2015 12:59 PM

Dave Winer

Silo-free is not enough

In a comment under my last piece, Drew Kime asks a question that needs asking.

"What is it about WordPress that you see as siloed?"

The answer might surprise you. Nothing. WordPress is silo-free.

  1. It has an open API. A couple of them.

  2. It's a good API. I know, because I designed the first one.

  3. WordPress is open source.

  4. Users can download their data.

  5. It supports RSS.

"So, if WordPress is silo-free, there must be something about MyWord that makes it worth doing, or you wouldn't be doing it," I can imagine Mr Kime asking, as a follow-up.

Silo-free is not enough

This question also came up in Phil Windley's latest MyWord post, where he introduced the editor with: "I can see you yawning. You're thinking 'Another blogging tool? Spare me!'" He went on to explain how MWE is radically silo-free.

But there's another reason I'm doing the MyWord blogging platform, which I explained in a comment.

MyWord Editor is going to be competitive in ease-of-use and power with the other blogging systems. The reason to use it won't be the unique architecture, for most people. It'll be that it's the best blogging system. This is something I know about, and I'm not happy with the way blogging tools have evolved.

The pull-quote: "I'm not happy with the way blogging tools have evolved."

Imagine if you took Medium, added features so it was a complete blogging system, and made it radically silo-free, and then add more features that amount to some of the evolution blogging would have made during its frozen period, the last ten years or so, and you'll have a pretty good idea of what I want to do with MWE.

Blogging is frozen

There haven't been new features in blogging in a long time. Where's the excitement? It looks to me like there's been no effort made to factor the user interface, to simplify and group functionality so the first-time user isn't confronted with the full feature set, left on his or her own to figure out where to go to create a new post or edit an existing one. Blogging platforms can be both easier and more powerful, I know because I've made blogging platforms that were.

Basically I got tired of waiting for today's blogging world to catch up to 2002. I figured out what was needed to win, and then set about doing it.

I have no ambition to start a company. I like to make software. I'm happy to keep going as we have been going. I think JavaScript is a wonderful platform, with apps running in the browser, and in Node.js on the server. It's a more fluid technology world today than it was in 2002 the last time I shipped a blogging platform.

I would also happily team up with companies who think this is a great opportunity. That blogging has stagnated too long, and that there must be ways to reinvigorate the market. I don't like taking shots at WordPress, but honestly I think it's stuck. I'm happy to talk with entrepreneurs who have ideas on how to create money-making businesses from an exciting user-oriented, user-empowering, radically silo-free open source blogging platform! Blogging needs a kick in the butt. I propose to give it one. With love. :kiss:

Anyway, to Drew, thanks for asking the question. I doubt if you were expecting this much of an answer, but the question needed asking, and I wanted to answer it.

PS: Here's an idea of what a MyWord post looks like.

March 28, 2015 12:58 PM

Fred Wilson

Video Of The Week: Talking VC With Mark Suster

Mark asked me to come on his TV show before leaving LA and I did that last week. It is an hour long broad ranging conversation about the venture capital and startup business.

by Fred Wilson at March 28, 2015 12:54 PM

CompsciOverflow

Example of worst case input for Build-Max-Heap

Is there a worst-case input for Build-Max-Heap?

I know there is, but I just can't paint a clear picture of it in my head.

by iterence at March 28, 2015 12:42 PM

StackOverflow

How does Scala define a type of return value?

How does Scala define a type of return value?

def sqrt = (x: Int) =>
  if (x > 0 && x < 4)
    x * x

The return type for my code is Int => AnyVal. But if I change it to

def method = (x: String) =>
  if (x.equals("abc"))
    x.concat(x)

The return type would be String => Any. But why not AnyRef? String is an object, so it is better to use AnyRef. Am I wrong?
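For reference (a sketch of the inference involved): an if without an else has Unit as its implicit else branch, so the function's result type is the least upper bound of the two branches. lub(Int, Unit) is AnyVal, while lub(String, Unit) is Any, because String is an AnyRef and Unit is an AnyVal, and their only common supertype is Any. An explicit else pins the type down:

def sqrt = (x: Int) =>
  if (x > 0 && x < 4) x * x
  else 0                  // both branches are Int, so the type is Int => Int

def method = (x: String) =>
  if (x.equals("abc")) x.concat(x)
  else x                  // both branches are String, so String => String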

by barbara at March 28, 2015 12:31 PM

/r/netsec

StackOverflow

How do i open a repl in a different namespace [duplicate]

This question already has an answer here:

Specifically with a leiningen uberjar.

java -cp myapp.jar clojure.main -r

gets me a REPL, but it defaults to the user namespace. What do I need to do to get it into myapp's namespace?

java -cp myapp.jar clojure.main -e (in-ns myapp.core)

gives me clojure.lang.LispReader$ReaderException

* Update * The ultimate goal is to simply run

java -jar myapp.jar

and have a Clojure REPL in my app's namespace. Every solution I've seen involves writing code on the command line that I want to put into my main method but can't seem to get running

(defn -main [&args]
  (clojure.main/main "-e" "(in-ns myapp.core)"))

completes/terminates immediately

by KobbyPemson at March 28, 2015 12:20 PM

TheoryOverflow

number of edges and weights in shortest paths?

We know the Bellman-Ford algorithm checks all edges in each step, and for each edge, if

d(v)>d(u)+w(u,v)

then d(v) is updated, where w(u,v) is the weight of edge (u,v) and d(u) is the length of the best path found so far for vertex u. If in one step no vertex is updated, the algorithm terminates. Supposing this algorithm, run to find all shortest paths from vertex s in a graph G with n vertices, finishes after k<n iterations, can we conclude:

1) the number of edges in every shortest path from s is at most k-1

2) the weight of every shortest path from s is at most k-1

I think neither the number of edges nor the total weight is bounded by k-1 under the problem definition, but my TA says (1) is true. How can I analyze these conditions?

by Mina Soli at March 28, 2015 12:07 PM

StackOverflow

How can I maintain a global information in a spark cluster?

It looks like information in Spark relies on the SparkContext.

If it stops (the application is over and sc.stop() is called), all information about this application disappears.

My question is, how can I maintain some information permanently (from Spark cluster start until cluster stop)?

For example, I want to calculate the MD5 of every application's jar file. I have tried to add a new class to the Spark source code to maintain this information. But each time a new application is submitted, this class is initialized, so the information cannot be preserved.

I also tried to add a HashMap to the object org.apache.spark.deploy.master.Master (I thought this stays alive for the cluster's whole lifetime), but even this way, it is initialized every time a new application is submitted.

So, how can I maintain global information in a Spark cluster? Create a new class (how and where)? Or add a Map member (in which class or object)?

by zeromem at March 28, 2015 12:05 PM

Planet Theory

Research Days

Nisheeth Vishnoi (whenever I see him, I am reminded that thoughts do, and have to, run ultra deep in research) is organizing this year's Research Day at EPFL, Lausanne, on June 30; poster on the left.

by metoo (noreply@blogger.com) at March 28, 2015 11:40 AM

CompsciOverflow

Relationship between Independent Set and Vertex Cover

Directly from Wikipedia: a set is independent if and only if its complement is a vertex cover.

Does this imply that the complement of the independent set problem is the vertex cover problem?
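For reference, the Wikipedia statement in symbols (with $\alpha$ the maximum independent set size and $\tau$ the minimum vertex cover size):

$$S \text{ independent in } G=(V,E) \iff V\setminus S \text{ is a vertex cover}, \qquad \alpha(G)+\tau(G)=|V|.$$

So the two optimization problems are complementary: a maximum independent set immediately gives a minimum vertex cover and vice versa, and the decision problems reduce to each other.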

by Teodorico Levoff at March 28, 2015 11:30 AM

TheoryOverflow

What happens when google's url shortener runs out of urls? [on hold]

Google allows you to generate shorter links that will redirect users to the proper page. The generated URL is 6 characters long and uses lower-case, upper-case, and numeric characters. Since characters can repeat, this allows 62^6 combinations, a little under 57 billion.

Pretend for a second that no one notices this happening; what happens as that fills up? Will generation fail as their servers struggle to find a unique value?

by Matthew G. at March 28, 2015 11:19 AM

StackOverflow

Type-safe primitives in Scala

I'd like to have type-safe "subclasses" of primitives in my Scala code without the performance penalty of boxing (for a very low-latency application). For example, something like this:

class Timestamp extends Long
class ProductId extends Long

def process(timestamp: Timestamp, productId: ProductId) {
  ...
}

val timestamp = 1: Timestamp // should not box
val productId = 1: ProductId // should not box

process(timestamp, productId) // should compile
process(productId, timestamp) // should NOT compile

There was a thread on the Scala User mailing list last year which seemed to conclude it wasn't possible without boxing, but I wonder if this is possible now in Scala 2.8.
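For reference (this question predates it): Scala 2.10 later added value classes, which target exactly this use case; a minimal sketch. Boxing is avoided on most code paths, though it still happens when the value is used generically (e.g. stored in a collection):

class Timestamp(val value: Long) extends AnyVal
class ProductId(val value: Long) extends AnyVal

def process(timestamp: Timestamp, productId: ProductId): Unit =
  println(s"${timestamp.value} ${productId.value}")

val timestamp = new Timestamp(1L) // normally erased to a raw Long at runtime
val productId = new ProductId(1L)

process(timestamp, productId)     // compiles
// process(productId, timestamp)  // does not compile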

by Mike at March 28, 2015 10:55 AM

What is the difference and issues between these two clojure functions?

For part of a class project I am implementing a function to read some data from a file and create a graph structure based on the file. Throughout the day I have asked a few questions and it has come down to this.

Below is a function that works as it should. It first reads in a file as a lazy sequence and then loops over the sequence parsing each line and printing it out.

(defn printGraph [filename, numnodes]
  (with-open [rdr (io/reader filename)]
    (let [lines (line-seq rdr)]
      (loop [curline (first lines)
             restlines (rest lines)]
        (println (lineToEdge curline))
        (cond (= 0 (count restlines)) curline
              :else
              (recur (first restlines)
                     (rest restlines)))))))

Here I use a function lineToEdge to parse a line of the file into an edge of the graph; the function is below.

(defn lineToEdge [line]
  (cond (.startsWith line "e")
        (let [split-line (into [] (.split line " "))
              first-str (get split-line 1)
              second-str (get split-line 2)]
          [(dec (read-string first-str)) (dec (read-string second-str))])))

Using this function and others provided by the assignment, I can tell that it works to parse a line into the proper format to add it to a graph:

finalproject.core> (add-edge (empty-graph 10) (lineToEdge "e 2 10"))
[#{} #{9} #{} #{} #{} #{} #{} #{} #{} #{1}]

So from this I can tell that given a parsed line from lineToEdge I can add it to a graph as it is represented by the program.

Now my issue starts when I want to add the edges to the graph from the file. It seems that when I add the logic to the function to actually add the lines to the graph, I get an error whose cause I just cannot track down. The function with this logic is seen below.

(defn readGraph [filename, numnodes]
  (with-open [rdr (io/reader filename)]
    (let [lines (line-seq rdr)]
      (loop [graph (empty-graph numnodes)
             curline (first lines)
             restlines (rest lines)]
        (add-edge graph (lineToEdge curline))
        (cond (= 0 (count restlines)) graph
              :else
              (recur (graph)
                     (first restlines)
                     (rest restlines)))))))

Even apart from trying to add the edges to the graph, if I simply bind graph to (empty-graph numnodes) in the loop and recur with (graph), never changing it, I still get the same error, which is given below.

finalproject.core> (readGraphEdges "/home/eccomp/finalproject/resources/11nodes.txt" 11)
ArityException Wrong number of args (0) passed to: PersistentVector  clojure.lang.AFn.throwArity (AFn.java:429)

From here I am not sure where the error lies. I mean, I can read the error and interpret it, but it leads me nowhere. The Clojure stack trace leaves no clues for me either.

Can anyone identify where the issue lies?

by KDecker at March 28, 2015 10:24 AM

/r/clojure

StackOverflow

How to code Auto Mysql 5.5.5 Install for Freebsd shell Script

Hi everyone, I'm trying to write an automatic MySQL 5.5.x install script for FreeBSD. I'm using this code:

portsnap fetch extract
cd /usr/ports/databases/mysql55-client ; make install clean
cd /usr/ports/databases/mysql55-server ; make install clean
echo 'mysql_enable="YES"' >> /etc/rc.conf
/usr/local/etc/rc.d/mysql-server onestart
rehash
mysqladmin -uroot password "password"
/usr/local/etc/rc.d/mysql-server onerestart
mysql -p password
GRANT ALL PRIVILEGES ON *.* TO 'mt2'@'localhost' IDENTIFIED BY 'mt2!@#' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'password' WITH GRANT OPTION;
flush privileges;
quit
/usr/local/etc/rc.d/mysql-server onerestart
reboot

I've got two problems. First, the mysql -p password command is not supplying the password; it asks for the password again. Second, sometimes this screen appears and the install stops. What do I have to choose on this screen, "ok" or "cancel", and is there a way to get past this screen automatically?

http://i.stack.imgur.com/J2AAf.png

by Aytug at March 28, 2015 09:55 AM

How to check mysql status on FreeBSD?

First, I will explain why I want to do this.

When I run a Ruby on Rails database rake command, it shows:

Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)

I created that file:

touch /tmp/mysql.sock

and ran the command again. It showed:

Can't connect to local MySQL server through socket '/tmp/mysql.sock' (38)

I searched on Google; most results said that I should check MySQL's status.

I am using FreeBSD 9.1 now. According to this article, there is a good method to do that:

http://www.cyberciti.biz/faq/freebsd-start-stop-restart-mysql-server/

But unluckily, I can't find mysql-server in my /usr/local/etc/rc.d/ directory.

I want to know where my mysql is, so I run:

whereis mysql

It showed me only this:

mysql: /usr/local/bin/mysql

But when I try:

/usr/local/bin/mysql status

It showed:

ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (38)

On the other hand, if I try to connect to my MySQL using this command:

mysql -utom -p1234 -h my_mysql_host_name

I can connect to my local database.

I don't understand the issue with /tmp/mysql.sock. It seems like a bad one.

by j-zhang at March 28, 2015 09:46 AM

How do you install nginx on CentOS 6.5 using ansible

I am new to using ansible and I am trying to set up a simple Hello World playbook. So far I have everything talking to each other, but I can't seem to automate the nginx install. I have tried several variations and I cannot seem to find any documentation for installing nginx via yum with ansible.

My playbook looks like this: (Sorry for the formatting). It runs through the EPEL release install and seems to hang forever on the nginx install.

 1 ---
 2 - hosts: webserver
 3   tasks:
 4     - name: Install EPEL release for nginx
 5       yum: name=epel-release state=present
 6
 7     - name: Install nginx web server
 8       yum: name=nginx state=installed update_cache=true
 9       notify:
10         - start nginx
11
12     - name: Upload the default index.html file
13       copy: src=html_files/index.html dest=/usr/share/nginx/www/ mode=0644
14
15   handlers:
16     - name: start nginx
17       service: name=nginx enables=yes state=started

Any help would be greatly appreciated.

If I change line 8 to yum: name=http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm state=present, it runs fine.

Playbook output for the failing task:

TASK: [Install nginx web server] **********************************************
<54.67.19.159> ESTABLISH CONNECTION FOR USER: root
<54.67.19.159> REMOTE_MODULE yum name=nginx state=latest update_cache=true
<54.67.19.159> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/username/.ansible/cp/ansible-ssh-%h-%p-%r" -o IdentityFile="/Users/username/.ssh/pemfile.pem" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 54.67.19.159 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1427534955.48-246337214944853 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1427534955.48-246337214944853 && echo $HOME/.ansible/tmp/ansible-tmp-1427534955.48-246337214944853'
<54.67.19.159> PUT /var/folders/l0/5f3qkrxd1sn976dzb5sfkk640000gn/T/tmpczLCV7 TO /root/.ansible/tmp/ansible-tmp-1427534955.48-246337214944853/yum
<54.67.19.159> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/username/.ansible/cp/ansible-ssh-%h-%p-%r" -o IdentityFile="/Users/username/.ssh/pemfile.pem" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=root -o ConnectTimeout=10 54.67.19.159 /bin/sh -c 'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python -tt /root/.ansible/tmp/ansible-tmp-1427534955.48-246337214944853/yum; rm -rf /root/.ansible/tmp/ansible-tmp-1427534955.48-246337214944853/ >/dev/null 2>&1'

by seanm1985 at March 28, 2015 09:38 AM

QuantOverflow

What are good online resources for credit portfolio managers?

I am aware that this question is not the typical Stack Overflow question, BUT I couldn't find any site/forum/wiki where credit portfolio managers hang out to share their experience and their methods. Therefore, I kindly ask you in advance not to downvote or close this question, because I am sure that there is a lot of interest in such a "soft" question.

What are the good online resources for credit portfolio managers?

I appreciate your replies!

by Kare at March 28, 2015 09:37 AM

/r/compsci

QuantOverflow

Java Implied Volatility Solving with Newtons Method

Hi, I am currently working on implementing Newton's method to guess implied volatility, and I have the same code as you do. However, my vol result goes to infinity, and I have not figured out why my method would lead to such results.

 public double calcVol(int strike, double vol, double price, int side){
    double vol_new, vol_old;
    vol_new = vol;
    do{
        vol_old = vol_new; // store the old vol value
        // Newton step: move by (price error / vega)
        vol_new = vol_old - (theoValue(strike, vol_old)-price) /
            calculateVega(strike, vol_old);
    }while(Math.abs(vol_old - vol_new) > tol);
    return vol_new;
}

For one iteration, we initialize vol to 0.3, and eventually vol_new goes to infinity and passes beyond the limit of the double data type in Java.

by Eric at March 28, 2015 09:33 AM

StackOverflow

Building OpenJDK - Corba issue

I'm trying to build OpenJDK 7 on WinXP x32 but getting a Corba error:

make[4]: Leaving directory '/cygdrive/c/Projects/OpenJDK/jdk7/corba/make/com/sun'
make[3]: Leaving directory '/cygdrive/c/Projects/OpenJDK/jdk7/corba/make/com'
abs_src_zip=`cd C:/Projects/OpenJDK/java_libs/java_libs/openjdk7/output_32/corba
/dist/lib ; pwd`/src.zip ; \
( cd ../src/share/classes ; find . \( -name \*-template \) -prune -o -type f -pr
int | zip -q $abs_src_zip -@ ) ; \
( cd C:/Projects/OpenJDK/java_libs/java_libs/openjdk7/output_32/corba/gensrc ; f
ind . -type f -print | zip -q $abs_src_zip -@ ) ;
File not found - *-template

zip error: Nothing to do! (/cygdrive/c/Projects/OpenJDK/java_libs/java_libs/open
jdk7/output_32/corba/dist/lib/src.zip)
FIND: Parameter format not correct

zip error: Nothing to do! (/cygdrive/c/Projects/OpenJDK/java_libs/java_libs/open
jdk7/output_32/corba/dist/lib/src.zip)
Makefile:147: recipe for target 'C:/Projects/OpenJDK/java_libs/java_libs/openjdk
7/output_32/corba/dist/lib/src.zip' failed
make[2]: *** [C:/Projects/OpenJDK/java_libs/java_libs/openjdk7/output_32/corba/d
ist/lib/src.zip] Error 12
make[2]: Leaving directory '/cygdrive/c/Projects/OpenJDK/jdk7/corba/make'
make/corba-rules.gmk:42: recipe for target 'corba-build' failed
make[1]: *** [corba-build] Error 2
make[1]: Leaving directory '/cygdrive/c/Projects/OpenJDK/jdk7'
Makefile:251: recipe for target 'build_product_image' failed
make: *** [build_product_image] Error 2

It looks like the *-template files weren't generated during the Corba build (I didn't find any *-template file in the Corba directory), and therefore the src.zip archive wasn't created.

What am I missing?

by Alex at March 28, 2015 09:27 AM

Why simple Scala tailrec loop for fibonacci calculation is faster in 3x times than Java loop?

Scala

code:

@annotation.tailrec
private def fastLoop(n: Int, a: Long = 0, b: Long = 1): Long = 
  if (n > 1) fastLoop(n - 1, b, a + b) else b

bytecode:

  private long fastLoop(int, long, long);
    Code:
       0: iload_1
       1: iconst_1
       2: if_icmple     21
       5: iload_1
       6: iconst_1
       7: isub
       8: lload         4
      10: lload_2
      11: lload         4
      13: ladd
      14: lstore        4
      16: lstore_2
      17: istore_1
      18: goto          0
      21: lload         4
      23: lreturn

result is 53879289.462 ± 6289454.961 ops/s:

https://travis-ci.org/plokhotnyuk/scala-vs-java/jobs/56117116#L2909

Java

code:

private long fastLoop(int n, long a, long b) {
    while (n > 1) {
        long c = a + b;
        a = b;
        b = c;
        n--;
    }
    return b;
}

bytecode:

  private long fastLoop(int, long, long);
    Code:
       0: iload_1
       1: iconst_1
       2: if_icmple     24
       5: lload_2
       6: lload         4
       8: ladd
       9: lstore        6
      11: lload         4
      13: lstore_2
      14: lload         6
      16: lstore        4
      18: iinc          1, -1
      21: goto          0
      24: lload         4
      26: lreturn

result is 17444340.812 ± 9508030.117 ops/s:

https://travis-ci.org/plokhotnyuk/scala-vs-java/jobs/56117116#L2881

Yes, it depends on environment parameters (JDK version, CPU model & memory frequency) and dynamic state. But why can mostly the same bytecode, in the same environment, produce a stable 2x-3x difference for a range of function arguments?

Here is a list of ops/s numbers for different values of the function arguments, from my notebook with an Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz (max 3.50GHz), 12 GB DDR3-1333 RAM, Ubuntu 14.10, Oracle JDK 1.8.0_40-b25 64-bit:

[info] Benchmark            (n)   Mode  Cnt          Score          Error  Units
[info] JavaFibonacci.loop     2  thrpt    5  171776163.027 ±  4620419.353  ops/s
[info] JavaFibonacci.loop     4  thrpt    5  144793748.362 ± 25506649.671  ops/s
[info] JavaFibonacci.loop     8  thrpt    5   67271848.598 ± 15133193.309  ops/s
[info] JavaFibonacci.loop    16  thrpt    5   54552795.336 ± 17398924.190  ops/s
[info] JavaFibonacci.loop    32  thrpt    5   41156886.101 ± 12905023.289  ops/s
[info] JavaFibonacci.loop    64  thrpt    5   24407771.671 ±  4614357.030  ops/s
[info] ScalaFibonacci.loop    2  thrpt    5  148926292.076 ± 23673126.125  ops/s
[info] ScalaFibonacci.loop    4  thrpt    5  139184195.527 ± 30616384.925  ops/s
[info] ScalaFibonacci.loop    8  thrpt    5  109050091.514 ± 23506756.224  ops/s
[info] ScalaFibonacci.loop   16  thrpt    5   81290743.288 ±  5214733.740  ops/s
[info] ScalaFibonacci.loop   32  thrpt    5   38937420.431 ±  8324732.107  ops/s
[info] ScalaFibonacci.loop   64  thrpt    5   22641295.988 ±  5961435.507  ops/s

by Andriy Plokhotnyuk at March 28, 2015 09:18 AM

Handling multiple TCP connections with Akka Actors

I'm trying to set up a simple TCP server using akka actors that should allow multiple clients to be connected simultaneously. I reduced my problem to the following simple program:

package actorfail
import akka.actor._, akka.io._, akka.util._
import scala.collection.mutable._
import java.net._

case class Foo()

class ConnHandler(conn: ActorRef) extends Actor {
  def receive = {
    case Foo() => conn ! Tcp.Write(ByteString("foo\n"))
  }
}

class Server(conns: ArrayBuffer[ActorRef]) extends Actor {
  import context.system
  println("Listing on 127.0.0.1:9191")
  IO(Tcp) ! Tcp.Bind(self, new InetSocketAddress("127.0.0.1", 9191))
  def receive = {
    case Tcp.Connected(remote, local) =>
      val handler = context.actorOf(Props(new ConnHandler(sender)))
      sender ! Tcp.Register(handler)
      conns.append(handler)
  }
}

object Main {
  def main(args: Array[String]) {
    implicit val system = ActorSystem("Test")
    val conns = new ArrayBuffer[ActorRef]()
    val server = system.actorOf(Props(new Server(conns)))
    while (true)  {
      println(s"Sending some foos")
      for (c <- conns) c ! Foo()
      Thread.sleep(1000)
    }
  }
}

It binds to localhost:9191 and accepts multiple connections, adding the connection handlers to a global array and periodically sending the string "foo" to each connection. Now when I try to connect with more than one client simultaneously, only the first one gets the "foo"s. When I open a second connection, it doesn't get sent any foos; instead I get the following type of log message:

Sending some foos
[INFO] [03/27/2015 21:24:07.331] [Test-akka.actor.default-dispatcher-6] [akka://Test/deadLetters] Message [akka.io.Tcp$Write] from Actor[akka://Test/user/$a/$b#-308726290] to Actor[akka://Test/deadLetters] was not delivered. [7] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

I understand that this would mean that the target actor to which we try to send the Tcp.Write command is no longer accepting messages. But why is that? Can you help me understand the underlying issue? How can I make this work?
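For what it's worth, one plausible culprit (an assumption, since I can't run this setup) is the well-known Akka pitfall of closing over sender inside a Props factory: sender is a method that is re-evaluated when the by-name closure runs, which can happen after the Tcp.Connected message has been processed, at which point it may return deadLetters. Capturing it into a val first would look like this:

def receive = {
  case Tcp.Connected(remote, local) =>
    // capture the connection ActorRef now; `sender` is re-evaluated later
    val connection = sender()
    val handler = context.actorOf(Props(new ConnHandler(connection)))
    connection ! Tcp.Register(handler)
    conns.append(handler)
}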

by Niklas B. at March 28, 2015 09:13 AM

How to run Ansible without specifying the inventory but the host directly?

I want to run Ansible from Python without specifying the inventory file (via ANSIBLE_HOST), but just by:

ansible.runner.Runner(
  module_name='ping',
  host='www.google.com'
)

I can actually do this easily in Fabric, but I wonder how to do it in Python with Ansible. On the other hand, the documentation of the Ansible Python API is not really complete.
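For Ansible 1.x, a sketch that should work (host_list accepts an in-memory list of hosts as well as an inventory file path; treat the exact keyword arguments as assumptions against your version):

from ansible import runner

# ad-hoc ping without an inventory file; host_list takes a plain list
results = runner.Runner(
    module_name='ping',
    module_args='',
    pattern='all',
    host_list=['www.google.com'],
).run()
print(results)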

by Ngoc Tran at March 28, 2015 09:07 AM

CompsciOverflow

Placing data on the bus for a synchronous read operation

This question is taken from Computer Organization and Architecture by William Stallings:

For a synchronous read operation (Figure 3.18), the memory module must place the data on the bus sufficiently ahead of the falling edge of the Read signal to allow for signal settling. Assume a microprocessor bus is clocked at 20 MHz and the Read signal begins to fall in the middle of T3.

a. Determine the length of the memory read instruction cycle.

b. When, at the latest, should memory data be placed on the bus? Allow 10 ns for the settling of data.

[Figure 3.18 (read-cycle timing diagram) not reproduced here]

The model answer that my instructor provided us for part b is: "The Read signal begins to fall at 75 ns from the beginning of the third clock cycle (middle of the second half of T3). The memory must place the data on the bus at: 125 ns – 10 ns = 115 ns."

However, I don't get how the Read signal can begin to fall 75 ns after the beginning of T3; shouldn't it be 50/2 = 25 ns from the start of T3? He then computed 115 ns, which I think is measured from the start of T2, not T3.

Also, why in part (a) did he multiply one cycle by 3 to find the memory read instruction cycle? Is it because the Read signal occupies 2 cycles and the valid data occupies 1 cycle, which is equivalent to 300 ns?
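For reference, the arithmetic as I understand it, measuring everything from the start of T1 (a sanity check on the numbers, not the instructor's derivation):

$$T = \frac{1}{20\ \text{MHz}} = 50\ \text{ns}, \qquad \text{read cycle} = 3T = 150\ \text{ns},$$
$$t_{\text{fall}} = 2T + \frac{T}{2} = 125\ \text{ns}, \qquad t_{\text{data}} \le 125\ \text{ns} - 10\ \text{ns} = 115\ \text{ns}.$$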

by AMH9 at March 28, 2015 08:56 AM

StackOverflow

ERROR: local_action is not a legal parameter at this level in an Ansible Playbook

Trying to use Ansible to spin up instances on Linode. I've pip-installed linode-python according to http://docs.ansible.com/linode_module.html

I've also made extra adjustments per http://softwareas.com/ansible-and-linode-what-i-learned-about-controlling-linodes-from-ansible/

The command line:

ansible localhost -m linode  -a "api_key=xxx name=test plan=1 distribution=124 datacenter=3 password=xxx state=present"

works. Why does this playbook not work?

---
- local_action:
     module: linode
     api_key: 'xxx'
     name: quickpic
     plan: 1
     datacenter: 3
     distribution: 124
     password: 'xxx'
     wait: yes
     wait_timeout: 600
     state: present

$ ansible-playbook test.yml
ERROR: local_action is not a legal parameter at this level in an Ansible Playbook
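A guess at the fix (hedged, since I can't see the rest of the playbook): a playbook's top level must be a list of plays, each with hosts, and local_action belongs inside a play's tasks list, something like:

---
- hosts: localhost
  connection: local
  tasks:
    - name: provision a linode
      local_action:
        module: linode
        api_key: 'xxx'
        name: quickpic
        plan: 1
        datacenter: 3
        distribution: 124
        password: 'xxx'
        wait: yes
        wait_timeout: 600
        state: present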

by JaseC at March 28, 2015 08:50 AM

CompsciOverflow

What is the name for this search algorithm?

A colleague came up with the following search algorithm:

  • We have an array of sorted, distinct integers $[i_0 < i_1 < \dots < i_n]$
  • We are looking for the index of $i_k$ (for simplicity suppose that $i_k$ is in the array)
  • We suppose that the elements are pretty uniformly distributed between $i_0$ and $i_n$

The algorithm:

  1. let "position" initially be 0
  2. if $i_k$ == $i_{position}$, stop
  3. otherwise:
    • calculate distance = $i_k$ - $i_{position}$
    • if we have just switched direction (e.g. the previous distance was positive and this one is negative, or vice versa), make sure that abs(distance) < abs(previous distance) by adding/subtracting 1 from distance so that it gets closer to 0. Note: if distance is already 1 or -1, it isn't adjusted.
    • let position be position + distance (see note below for possible optimization)
    • if position is positive, let position be min(position, n) (ie. don't overshoot the end)
    • if position is negative, let position be max(position, 0) (ie. don't undershoot the beginning)

I would be interested to know whether somebody has studied this algorithm and, if so, what its name is (a code sketch of it appears below, after the notes). The best I could come up with is that it's a variant of "interpolation search"; however, I expect the above algorithm to be faster on x86 hardware because it only uses addition/subtraction, which is faster than the division/multiplication used by "standard" interpolation search.

Note:

Instead of updating position by adding the distance (i.e. position = position + distance) we can use the "average distance between consecutive elements" as a scaling factor: $$\text{position} \leftarrow \text{position} + \frac{\text{distance}}{(i_n - i_0)/(n+1)}$$

This should give a more precise estimate about the element position.

A disadvantage of the above is that it involves a division, which is slow(er) on x86/x64 platforms. Perhaps we can get away with finding the closest power of two to the scaling factor (the average distance between consecutive elements) and use a shift to perform the division (i.e. if $2^t$ is the power of two closest to the scaling factor, we can do $position \leftarrow position + (distance >> t)$).

Update:

  • as people have pointed out the initial element needs to be $i_0$
  • added steps to avoid oscillating indefinitely between undershooting / overshooting the target
  • discovered the "galloping search" from Timsort which seems related but not the same
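Here is the promised sketch, a direct transcription of the steps above (it assumes the target is present and that the unscaled update is used, i.e. the average gap is near 1; the scaled update from the note is left out for clarity):

def additive_search(arr, target):
    n = len(arr) - 1
    position, prev_distance = 0, None
    while arr[position] != target:
        distance = target - arr[position]
        # on a direction switch, push |distance| below |previous distance|,
        # moving it toward 0, so we cannot oscillate forever
        if prev_distance is not None and (distance > 0) != (prev_distance > 0):
            while abs(distance) >= abs(prev_distance) and abs(distance) > 1:
                distance += -1 if distance > 0 else 1
        prev_distance = distance
        position = min(max(position + distance, 0), n)  # don't over/undershoot
    return position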

by Cd-MaN at March 28, 2015 08:42 AM

TheoryOverflow

arbitrary segment stabbing query for 2d segments

Store a set of 2d segments S in some data structure. For an arbitrary query 2d segment q, answer a yes/no question in sublinear time: does q intersect any segment in S?

If the query is a vertical or horizontal segment, it can be solved by a 2D segment tree in $O(\log n)$; the preprocessing is to sort the x-coordinates or y-coordinates of the endpoints. But what about an arbitrary segment query?

by Yan Zhu at March 28, 2015 08:39 AM

Halfbakery

StackOverflow

Using mapTo with futures in Akka/Scala

I've recently started coding with Akka/Scala, and I've run into the following problem:

With an implicit conversion in scope, such as:

implicit def convertTypeAtoTypeX(a: TypeA): TypeX =
    TypeX() // just some kinda conversion

This works:

returnsAFuture.mapTo[TypeX].map { x => ... }

But this doesn't:

returnsAFuture.mapTo[TypeX].onComplete { ... }

The latter fails with a ClassCastException (i.e. TypeA cannot be cast to TypeX).

Very confused. Why? I suspect it has something to do with Try, but I don't know enough about either to guess at any sort of answer :(

Thanks!

by qqqqq at March 28, 2015 07:56 AM

/r/clojure

CompsciOverflow

Strongest statement to be made about a decision problem and its complement

Let's say I have a decision problem D and its complement D'. I know D is poly-time reducible to D' (its complement). Furthermore, I know D is NP-complete. What is the strongest statement I could possibly make about this kind of relationship?

by Teodorico Levoff at March 28, 2015 07:47 AM

StackOverflow

deconstruct a compound tuple in Scala

I have a function that returns a tuple of which one item is also a tuple.

def foo: (Any, (Any, Any))

The actual types are not really Any but this is a simplification of the actual code (hence I dub this a compound tuple for the sake of this question).

Now I deconstruct this tuple as follows, wishing to proceed with a1, a2, a3.

val (a1, bar) = foo
val (a2, a3) = bar

Is there a one-liner for this?
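For the record, Scala patterns nest, so (assuming nothing unusual about foo's types) a single nested binding compiles:

val (a1, (a2, a3)) = foo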

by matt at March 28, 2015 07:40 AM

/r/emacs

TRAMP tries to re-send the password after successfully sending it

I really like TRAMP, either to edit files with Emacs as root or to edit any file that exists remotely.

But I'm experiencing one issue: TRAMP tries to re-send the password after it has been entered correctly.

For instance, if I try to edit any file in /etc, TRAMP will ask for my password; after entering it, the file is visited and I start editing. But after 5 seconds, TRAMP tries to re-send the password and/or asks for it again.

I don't have any fancy customization for TRAMP so I don't know what's causing this problem...

submitted by shackra
[link] [2 comments]

March 28, 2015 07:24 AM

/r/compsci

/r/netsec

Lobsters

CompsciOverflow

Understanding terms related to the 2-SAT algorithm

Recently I have been learning about the solution of the 2-satisfiability problem using SCCs (strongly connected components). There is a theorem related to this problem, given below:

Theorem 2: The formula F is true if and only if none of the following three conditions holds:
3(i) An existential vertex $u$ is in the same strong component as its complement $u'$.
3(ii) A universal vertex $u_i$ is in the same strong component as an existential vertex $u_j$ such that $j < i$ (i.e., $x_i$ is not quantified within the scope of $Q_i$).
3(iii) There is a path from a universal vertex $u$ to another universal vertex $v$. (This condition includes the case that v = U.)

But I am not familiar with the terms "universal vertex" and "existential vertex". Can anyone please explain these terms to me? Thanks in advance.

by Mostafizur at March 28, 2015 06:32 AM

StackOverflow

SPA template using Spray

I'm looking for a SPA template using Spray and Slick on the backend and Angular on the frontend. Something like https://github.com/softwaremill/bootzooka would be perfect, but for Spray. I will probably go with bootzooka unless I can find an equivalent, but so far I haven't found anything of the same quality.

Thanks

by BrendanMcKee at March 28, 2015 06:22 AM

deploy spray with akka on tomcat 7, cannot get response from REST API

I am trying to deploy my app on Tomcat. I can get to my index.html, but my API responds with

404 not found

and I can't figure out what I am doing wrong.

this is my service actor

class DemoRoute extends Actor with DemoRouteService {
  implicit def actorRefFactory: ActorContext = context
  def receive = runRoute(route)
}

trait DemoRouteService extends HttpService{

  val route = {
    import com.tr.em.domain.JsonImplicits._
     path("foo"/"status"){
          get{
            complete("I feel good, thanks for checking")
          }
        }
    }
}

This is my web.xml

<?xml version="1.0"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0">
    <listener>
        <listener-class>spray.servlet.Initializer</listener-class>
    </listener>

    <servlet>
        <servlet-name>SprayConnectorServlet</servlet-name>
        <servlet-class>spray.servlet.Servlet30ConnectorServlet</servlet-class>
        <async-supported>true</async-supported>
    </servlet>

    <servlet-mapping>
        <servlet-name>SprayConnectorServlet</servlet-name>
        <url-pattern>/*</url-pattern>
    </servlet-mapping>

</web-app>

this is my application.conf

akka {
  loglevel = INFO
  event-handlers = ["akka.event.slf4j.Slf4jEventHandler"]
}

spray.servlet {
  boot-class = "com.tr.em.SprayBoot"
  request-timeout = 10s
}

this is my boot class

import spray.servlet.WebBoot
import akka.actor.ActorSystem
import akka.actor.Props

class SprayBoot extends WebBoot {
  val system = ActorSystem("systemactor")
  val serviceActor = system.actorOf(Props[DemoRoute])
  system.registerOnTermination {
    system.log.info("Application shut down")
  }
}

by igx at March 28, 2015 06:03 AM

Fefe

The parliamentary intelligence oversight committee has ...

The parliamentary intelligence oversight committee, by the way, learned about Eikonal from the press. Because parliamentary oversight of the intelligence services works about as reliably as voluntary self-regulation helps against environmental pollution.

March 28, 2015 06:01 AM

/r/compsci

Software Test Engineer

Hey guys, I'm new to this sub, but I just accepted a job as a Software Test Engineer. Do any of you guys know where I can teach myself a little bit before I go into training? Any help would be appreciated. Thanks guys!

submitted by burnz248
[link] [1 comment]

March 28, 2015 05:29 AM

Halfbakery

StackOverflow

Strengths of Clojure testing frameworks?

Which one do you prefer and why? What are the pros and cons of each? In which scenario does each outshine the others?

I'm particularly interested in midje vs. clojure.test, but feel free to bring up other Clojure testing frameworks too.

See also Best unit testing framework for Clojure? (the answers to that question didn't provide much detail on the "why").

by bshanks at March 28, 2015 04:56 AM

What autotest tools exist for Clojure

I was wondering what autotest tools exist for Clojure. In Ruby I have ZenTest, redgreen, etc., to continuously keep testing my code. I would like to have something similar for Clojure.

So far I have found this simple script, https://github.com/devn/clojure-autotest, on GitHub. A bit crude for my taste: all tests run when a change occurs, and it may blurt out a long stack trace in case of syntax errors, obscuring what went wrong.

by zetafish at March 28, 2015 04:42 AM

3? ways in scala to return a function from a function - 1 doesn't compile - don't understand why

I'm learning Scala. I have a Scala function which can return another function. I've come across 3 ways to do this in Scala (there may be more). In this particular code the 3rd option doesn't seem to compile, but I've seen this technique used elsewhere and can't work out why it isn't working in this case.

In this fairly contrived example I have a function that takes an Int and returns a function that maps an Int to Boolean.

  def intMapper1(elem: Int): Int => Boolean = {
    def mapper(x: Int): Boolean =
      x == elem
    mapper
  }

  def intMapper2(elem: Int): Int => Boolean = (x: Int) => x == elem

  def intMapper3(elem: Int)(x: Int) = x == elem

  val m1 = intMapper1(2)
  val m2 = intMapper2(4)
  val m3 = intMapper3(6)

I get the compile error:

Error:(35, 22) missing arguments for method intMapper3 in object FunSets;
follow this method with `_' if you want to treat it as a partially applied function
  val m3 = intMapper3(6)
           ^
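In case it helps future readers, the error message itself points at the usual fix: intMapper3 is a curried method, not a function value, so eta-expand it explicitly or supply the expected type (both sketches below should compile under Scala 2):

val m3 = intMapper3(6) _                 // explicit eta-expansion
val m3b: Int => Boolean = intMapper3(6)  // expected type triggers expansion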

by Martin Bayly at March 28, 2015 04:39 AM

/r/dependent_types

StackOverflow

xml/parse: how to convert the string in file into lower case before parse?

To parse an XML file in Clojure we could use

 (clojure.xml/parse file)

But the XML has both upper case and lower case, and I'd like to convert all the strings to lower case before parsing. One solution is to create a temp file containing the lower-cased content of the original. But are there any better solutions?
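One temp-file-free sketch (clojure.xml/parse also accepts an InputStream, so we can lower-case in memory; note this lower-cases attribute values and text content too, which may or may not be what you want):

(require '[clojure.xml :as xml]
         '[clojure.string :as str])

(defn parse-lower [file]
  (-> (slurp file)
      str/lower-case
      (.getBytes "UTF-8")
      (java.io.ByteArrayInputStream.)
      xml/parse))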

by Daniel Wu at March 28, 2015 04:11 AM

Lobsters

Erlang Factory SF 2015 Videos Online

Mindblowing collection of incredible talks, any one of which would be worth making the trip, posted even before the conference ended. Outstanding job, Erlang Solutions!

Comments

by felixgallo at March 28, 2015 03:37 AM

StackOverflow

What is the difference between this `doseq` statement and a `for` statement; reading file in clojure?

If you have been following my questions over the day,

I am doing a class project in clojure and having difficulty reading a file, parsing it, and creating a graph from its content. I have managed to open and read a file along with parsing the lines as needed. The issue I face now is creating a graph structure from the data that was read in.

Some background first. In other functions I have implemented in this project I have used a for statement to "build up" a list of values as such

...
(let [rem-list (remove nil? (for [j (range (count (graph n)))]
    (cond (< (rand) 0.5)
        [n (nth (seq (graph n)) j)])))
...

This for would build up a list of edges to remove from a graph; after it was done, I could then use rem-list in a reduce to remove all of the edges from some graph structure.

Back to my issue. I figured that if I were to read a file line by line, I could "build up" a list in the same manner, so I implemented the function below.

(defn readGraphFile [filename, numnodes]
  (let [edge-list 
        (with-open [rdr (io/reader filename)]
          (doseq [line (line-seq rdr)]
           (lineToEdge line)))]
    (edge-list)))

Though if I run this function, I end up with a NullPointerException, as if nothing was ever "added" to edge-list. So, being the lazy/good? programmer I am, I quickly thought of another way, though it still somewhat relies on my thinking of how the for built the list.

In this function I first let graph be equal to an empty graph with the known number of nodes. Then, each time a line was read, I would simply add that edge (each line in the file is an edge) to the graph, in effect "building up" my graph. The function is shown below.

(defn readGraph [filename, numnodes]
  (let [graph (empty-graph numnodes)]
    (with-open [rdr (io/reader filename)]
      (doseq [line (line-seq rdr)]
        (add-edge graph (lineToEdge line))))
    graph))

Here lineToEdge returns a pair of numbers (e.g. [1 2]), which is proper input for the add-edge function.

finalproject.core> (add-edge (empty-graph 5) (lineToEdge "e 1 2"))
[#{} #{2} #{1} #{} #{}]

The issue with this function, though, is that it seems to never actually add an edge to the graph:

finalproject.core> (readGraph "/home/eccomp/finalproject/resources/11nodes.txt" 11)
[#{} #{} #{} #{} #{} #{} #{} #{} #{} #{} #{}]

So I guess my issue lies with how doseq is different from for? Is it different or is my implementation incorrect?
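A sketch of what I believe is going on (treat it as a guess): doseq exists purely for side effects and always returns nil, so edge-list is nil and (edge-list) then calls nil as a function, hence the NullPointerException; in the second version, add-edge returns a new graph that doseq simply throws away. Using for (forced with doall before the reader closes) plus reduce would look like:

(defn readGraphFile [filename numnodes]
  (let [edge-list (with-open [rdr (io/reader filename)]
                    (doall                      ; force before rdr closes
                     (for [line (line-seq rdr)
                           :when (.startsWith line "e")]
                       (lineToEdge line))))]
    (reduce add-edge (empty-graph numnodes) edge-list)))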

by KDecker at March 28, 2015 03:32 AM

QuantOverflow

Excel to Java for Interactive brokers

I have a working excel workbook connected to Interactive brokers DDE API. I am struggling to upgrade to a more robust environment like Java. I tried to change it to ActiveX for Excel but the refreshing rate is limited and IB does not support microsoft RTD server. Can anybody suggest the easiest way to migrate my DDE workbook to Java. Since IB provides a java sample which provide the basic connection between TWS. Is it possible to feed the quote from Java API into the excel for calculation?

by Marcus at March 28, 2015 03:12 AM

Value a structured note with Black-Scholes

Apologies in advance if this seems like a straightforward question, but I'm really unsure how to go about it. Say I have the payoff for a structured note benchmarked against an index, and I have figured out that a combination of two different options will essentially provide the same payoff. When I use the Black-Scholes-Merton model to value the options, the value that I get is significantly lower than the par value of the note, e.g. the par value is 1000 and the options are at 200 in total. Is that possible? What is the general approach when it comes to calculating the value of a structured note?

Thanks!

by PLui at March 28, 2015 03:02 AM

StackOverflow

Add data type from PostgreSQL extension in Slick

I'm using the PostGIS extension for PostgreSQL and I'm trying to retrieve a PGgeometry object from a table.

This version is working fine:

import java.sql.DriverManager
import java.sql.Connection
import org.postgis.PGgeometry

object PostgersqlTest extends App {
  val driver = "org.postgresql.Driver"
  val url = "jdbc:postgresql://localhost:5432/gis"

  var connection:Connection = null

  try {
    Class.forName(driver)
    connection = DriverManager.getConnection(url)

    val statement = connection.createStatement()
    val resultSet = statement.executeQuery("SELECT geom FROM table;")

    while ( resultSet.next() ) {
      val geom = resultSet.getObject("geom").asInstanceOf[PGgeometry]
      println(geom)
    }
  } catch {
    case e: Exception => e.printStackTrace()
  }
  connection.close()
}

I need to be able to do the same thing using a Slick custom query. But this version doesn't work:

Q.queryNA[PGgeometry]("SELECT geom FROM table;")

and gives me this compilation error:

Error:(50, 40) could not find implicit value for parameter rconv: scala.slick.jdbc.GetResult[org.postgis.PGgeometry]
  val query = Q.queryNA[PGgeometry](
                                   ^

Is there a simple way to add the PGgeometry data type in Slick without having to convert the returned object to a String and parse it?
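A sketch of the missing piece for Slick 2.x (the exact imports and method names here are assumptions): queryNA needs an implicit GetResult telling Slick how to read each row, and the PostGIS driver hands back the geometry as an object we can cast, mirroring the plain-JDBC version above:

import scala.slick.jdbc.{GetResult, StaticQuery => Q}
import org.postgis.PGgeometry

// nextObject maps to ResultSet.getObject (assumed available here)
implicit val getPGgeometry: GetResult[PGgeometry] =
  GetResult(r => r.nextObject().asInstanceOf[PGgeometry])

val query = Q.queryNA[PGgeometry]("SELECT geom FROM table;")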

by synapski at March 28, 2015 02:28 AM

/r/emacs

QuantOverflow

Stock Returns Distribution in Heston Model

There is a 2002 paper by Dragulescu and Yakovenko (DY) proposing a pdf for stock returns in the Heston model. However, in a paper by Daniel, Bree and Joseph, they actually perform statistical tests on DY's pdf and show it is not really any better than a log-normal pdf.

Is anyone aware of more recent attempts at a closed-form solution for the distribution of returns under the Heston model?

by bcf at March 28, 2015 02:12 AM

Planet Clojure

LispCast Intro to clojure.test

LispCast Intro to clojure.test will launch this week.

Read full post

by LispCast at March 28, 2015 01:36 AM

StackOverflow

Running spark on top of Slurm

How can I run Spark on top of a Slurm cluster? I am much interested in defining the SparkContext inside my program and setting how many nodes I want to use, but if I have to write some bash scripts for it, that would also be okay.

by Omid at March 28, 2015 01:22 AM

Maps and keywords in clojure

So I do my query on my posts for their eid and I get a list:

(map :eid (get-all-posts))

(17592186045421 17592186045438 17592186045440 17592186045540 
 17592186045545 17592186045550 17592186045588 17592186045590 
 17592186045592 17592186045594 17592186045597 17592186045608 
 17592186045616 17592186045721 17592186045866 17592186046045 
 17592186046047 17592186046075 17592186046077 17592186046079 
 17592186046081 17592186046083 17592186046085 17592186046088 
 17592186046149 17592186046158 17592186046170 17592186046174 
 17592186046292 17592186046352 17592186046362 17592186047146 
 17592186047211)

Every post can have many :ratings,

(get-all-ratings 17592186047211)

({:rating "positive", 
  :rid 17592186047652, 
  :email "vaso@ph"} 
 {:rating "positive", 
  :rid 17592186047756, 
  :email "sova.kua@gmcom"} 
 {:rating "meh", 
  :rid 17592186047725, 
 :email "badger@ton"}) 

I would like to combine the maps so I can (count) the number of ratings per post.

(count (get-all-ratings 17592186047211))

 3

But trying something like

(count (map get-all-ratings (map :eid (get-all-posts))))

simply returns the number of posts... it's a big list like

(({:rating "plus", :rid 7175 ..}
  {:rating "plus" :rid 7374})
 ({:rating "whatever", :rid 7535})
 ()
 () <some have no ratings
 ({:rating "green", :rid 7152}))

How can I tally up the number of times :rating shows up in a list of lists of maps like the above?

>(map :rating (map first (map get-all-ratings (map :eid (get-all-posts)))))

is something I've been playing with, and it returns a list, but as you can probably tell, it just returns one :rating field from each post's expanded ratings map.

Edit: I noticed just now that something like

({:rating 5}
 {:rating 3}
 {:rating 7})

just becomes

 {:rating 7} 

...disregarding other entries in the list. How come?
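A sketch of the per-post tally (a hedged guess at the intent; comp chains the lookups so each post's own ratings get counted, instead of counting the outer list):

;; ratings per post, in post order
(map (comp count get-all-ratings :eid) (get-all-posts))

;; grand total across all posts
(reduce + (map (comp count get-all-ratings :eid) (get-all-posts)))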

by sova at March 28, 2015 01:22 AM

clojure macro not working properly with doseq

Update: Thank you everyone for your responses, but I seem to have made a bad choice using an embedded "def" in my example, which is throwing people off. This has nothing to do with def; the problem still occurs if I do not use a def. As to why I'm doing it this way -- honestly, I'm just trying to learn about macros, and this is just one of the ways that occurred to me. I'm just trying to understand how macros work. I may very well ultimately end up using a different mechanism. I also know that having multiple defs (including defmacros) for the same thing is considered bad practice, but it still seems to me that this way should work.

I am re-factoring my examples:

When I write variations of a macro-generating macro in-line (with a simplified version of what I'm actually doing):

(do 
    (defmacro abc []
      `(defmacro xyz []
         ;;(def x 7)))
         (+ 7 1)))
    (abc)
    ;;(xyz)
    ;;(spit "log.txt" (format "pass 1: x=%s\n" x ) :append false))
    (spit "log.txt" (format "pass 1: results=%s\n" (xyz) ) :append false))

(do 
    (defmacro abc []
      `(defmacro xyz []
         ;;(def x 8)))
         (+ 8 1)))
    (abc)
    ;;(xyz)
    ;;(spit "log.txt" (format "pass 2: x=%s\n" x ) :append true))
    (spit "log.txt" (format "pass 1: results=%s\n" (xyz) ) :append false))

(do 
    (defmacro abc []
      `(defmacro xyz []
         ;;(def x 9)))
         (+ 9 1)))
    (abc)
    ;;(xyz)
    ;;(spit "log.txt" (format "pass 3: x=%s\n" x ) :append true))
    (spit "log.txt" (format "pass 1: results=%s\n" (xyz) ) :append false))

It gives me what I expect:

pre-refactor:
cat log.txt 
pass 1: x=7
pass 2: x=8
pass 3: x=9

post-refactor:
cat log.txt 
pass 1: results=8
pass 2: result=9
pass 3: result=10

But when I try to iterate using doseq, it only seems to give me one value:

(def int-lookup [7 8 9])

  (doseq [i (range 3)]
    (defmacro abc []
      `(defmacro xyz []
         ;;(def x ~(int-lookup i))))
         (+ 1 ~(int-lookup i))))
      (abc)
      ;;(xyz)
      ;;(spit "log.txt" (format "pass %s: x=%s\n" i x) :append (if (= i 0) false true)))
      (spit "log.txt" (format "pass %s: result=%s\n" i (xyz)) :append (if (= i 0) false true))

Output:

pre-refactor:
cat log.txt 
pass 0: x=9
pass 1: x=9
pass 2: x=9

post-refactor
cat log.txt 
pass 0: result=10
pass 1: result=10
pass 2: result=10

I've seen it give me all 7's, and all 8's too, but never mixed.

I've tried resetting the macro symbols in-between like so:

(ns-unmap *ns* 'xyz)
(ns-unmap *ns* 'x)

However, this makes things even worse, sporadically generating:

CompilerException java.lang.RuntimeException: Unable to resolve symbol: xyz in this context, compiling:(/tmp/form-init2424586203535482807.clj:5:5) 

I'm sort of assuming the compiler is somehow optimizing the macro definition or call, so it's only actually driving it once when using doseq. If this is the case, then how would you iterate over defmacro definitions and not have this happen? I intend to have about 15 iterations in my final solution, so I really don't want to have to in-line all the definitions.

by vt5491 at March 28, 2015 01:10 AM

/r/netsec

StackOverflow

Scala - designing a DSL with minimal syntax

I'm looking to design a DSL in Scala that has the least amount of syntactic cruft possible. It's meant to be used by users who don't know Scala but can take advantage of Scala's type system for validation and error checking. In my head the DSL looks like this:

outer {
    inner(id = "asdf") {
        value("v1")
        value("v2")
    }
}

This snipped should produce a value like this:

Outer(Inner("asdf", Value("v1") :: Value("v2") :: Nil))

Given data structures

case class Outer(inner: Inner)
case class Inner(values: List[Value])
case class Value(value: String)

The idea is that the inner function is only available in the closure following outer, the value function is only available within the closure after inner, etc. That is, the following won't compile: outer { value("1") }.

How can I implement something like this? In the end the data structures don't need to be immutable; they can be anything, as long as it's strongly typed.

I have no familiarity with Scala macros, but can I solve this problem with macros by any chance?


The closest I have so far is the following implementation:

object DSL extends App {
    import scala.collection.mutable

    def outer = new Outer()

    class Outer(val values: mutable.MutableList[Inner] = mutable.MutableList.empty) {
        def inner(id: String): Inner = {
            val inner = new Inner(id)
            values += inner
            inner
        }
        def apply(func: Outer => Unit): Outer = {
            func(this)
            this
        }
        override def toString: String = s"Outer [${values.mkString(", ")}]"
    }

    class Inner(val id: String, val values: mutable.MutableList[Value] = mutable.MutableList.empty) {
        def value(v: String): Value = {
            val value = new Value(v)
            values += value
            value
        }
        def apply(func: Inner => Unit): Unit = func(this)

        override def toString: String = s"Inner (${values.mkString(", ")})"
    }

    class Value(val str: String) {
        override def toString: String = s"Value<$str>"
    }

    val value = outer { o =>
        o.inner(id = "some_id") { i =>
            i.value("value1")
            i.value("value2")
        }
    }

    println(value)
}

How can I get rid of the anonymous function annotations (i.e. o => and o., etc.)?

Alternatively, is there a way to treat outer as new Outer (in which case the following code block would be treated as a constructor body and I would be able to call member functions directly)?
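For comparison, a sketch of one common trick (names are illustrative, and it buys the syntax but not the compile-time scoping: value would still compile, though fail, directly under outer; the scoping part usually needs implicit builder parameters or macros). The idea is to keep "current collector" state in the enclosing object and take the blocks by name:

import scala.collection.mutable.ListBuffer

object Dsl {
  case class Value(v: String)
  case class Inner(id: String, values: List[Value])
  case class Outer(inners: List[Inner])

  private var innerBuf: ListBuffer[Inner] = _
  private var valueBuf: ListBuffer[Value] = _

  def outer(body: => Unit): Outer = {
    innerBuf = ListBuffer.empty
    body                              // runs the inner(...) calls
    Outer(innerBuf.toList)
  }

  def inner(id: String)(body: => Unit): Unit = {
    valueBuf = ListBuffer.empty
    body                              // runs the value(...) calls
    innerBuf += Inner(id, valueBuf.toList)
  }

  def value(v: String): Unit = valueBuf += Value(v)
}

import Dsl._
val result = outer {
  inner(id = "asdf") {
    value("v1")
    value("v2")
  }
}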

by ak. at March 28, 2015 12:59 AM

CompsciOverflow

kth nearest vertex in an unweighted graph

Given an unweighted undirected graph $G$ with $10^5$ vertices and a subset $S$ of special vertices and an integer $k$, I want to find the $k$th nearest special vertex for each vertex. What algorithm can I use for this problem?

I'm actually thinking of an algorithm for finding shortest paths from every vertex to all other vertices (like the Floyd-Warshall algorithm, but in our case the graph is unweighted and we need much better performance).
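One standard approach worth sketching (my suggestion, not from the question): a multi-source BFS in which each vertex may be settled by up to $k$ distinct special sources. Since all sources start at distance 0 and edges have weight 1, the queue pops entries in nondecreasing distance order, so the $k$-th label recorded at $v$ is the distance to its $k$-th nearest special vertex, in $O(k(|V|+|E|))$ time.

from collections import deque

def kth_nearest_special(adj, special, k):
    labels = {v: set() for v in adj}          # sources that reached v so far
    kth = {}                                  # v -> distance of k-th source
    q = deque((s, s, 0) for s in special)     # (vertex, source, distance)
    while q:
        v, src, d = q.popleft()
        if src in labels[v] or len(labels[v]) >= k:
            continue                          # stale, or already settled k times
        labels[v].add(src)
        if len(labels[v]) == k:
            kth[v] = d
        for u in adj[v]:
            q.append((u, src, d + 1))
    return kth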

by milos at March 28, 2015 12:37 AM

UnixOverflow

Status of 5GHz support for 802.11 WiFi devs in BSD (2014)

To put it simply, the 2.4 GHz band doesn't work anymore due to the plethora of various Wi-Fi and other devices, whereas 5 GHz is still almost entirely empty.

Do any of the BSDs -- FreeBSD, OpenBSD, NetBSD, DragonFly -- support wireless networking in the 5GHz band?

Any one of 802.11a, 802.11n, 802.11ac, as long as it's on the 5GHz band?

by cnst at March 28, 2015 12:36 AM

Jeff Atwood

Toward a Better Markdown Tutorial

It's always surprised me when people, especially technical people, say they don't know Markdown. Do you not use GitHub? Stack Overflow? Reddit?

I get that an average person may not understand how Markdown is based on simple old-school plaintext ASCII typing conventions. Like when you're *really* excited about something, you naturally put asterisks around it, and Markdown makes that automagically italic.

But how can we expect them to know that, if they grew up with wizzy-wig editors where the only way to make italic is to click a toolbar button, like an animal?

I am not advocating for WYSIWYG here. While there's certainly more than one way to make italic, I personally don't like invisible formatting tags and I find that WYSIWYG is more like WYCSYCG in practice. It's dangerous to be dependent on these invisible formatting codes you can't control. And they're especially bad if you ever plan to care about differences, revisions, and edit history. That's why I like to teach people simple, visible formatting codes.

We can certainly debate which markup language is superior, but in Discourse we tried to build a rainbow tool that satisfies everyone. We support:

  • HTML (safe subset)
  • BBCode (basic subset)
  • Markdown (full)

This makes coding our editor kind of hellishly complex, but it means that for you, the user, whatever markup language you're used to will probably "just work" on any Discourse site you happen to encounter in the future. But BBCode and HTML are supported mostly as bridges. What we view as our primary markup format, and what we want people to learn to use, is Markdown.

However, one thing I have really struggled with is that there isn't any single great place to refer people to with a simple walkthrough and explanation of Markdown.

When we built Stack Overflow circa 2008-2009, I put together my best effort at the time which became the "editing help" page:

It's just OK. And GitHub has their Markdown Basics, and GitHub Flavored Markdown help pages. They're OK.

The Ghost editor I am typing this in has an OK Markdown help page too.

But none of these are great.

What we really need is a great Markdown tutorial and reference page, one that we can refer anyone to, anywhere in the world, from someone who barely touches computers to the hardest of hard-core coders. I don't want to build another one for these kinds of help pages for Discourse, I want to build one for everyone. Since it is for everyone, I want to involve everyone. And by everyone, I mean you.

After writing about Our Programs Are Fun To Use – which I just updated with a bunch of great examples contributed in the comments, so go check that out even if you read it already – I am inspired by the idea that we can make a fun, interactive Markdown tutorial together.

So here's what I propose: a small contest to build an interactive Markdown tutorial and reference, which we will eventually host at the home page of commonmark.org, and can be freely mirrored anywhere in the world.

Some ground rules:

  • It should be primarily in JavaScript and HTML. Ideally entirely so. If you need to use a server-side scripting language, that's fine, but try to keep it simple, and make sure it's something that is reasonable to deploy on a generic Linux server anywhere.

  • You can pick any approach you want, but it should be highly interactive, and I suggest that you at minimum provide two tracks:

    • A gentle, interactive tutorial for absolute beginners who are asking "what the heck does Markdown even mean?"

    • A dynamic, interactive reference for intermediates and experts who are asking more advanced usage questions, like "how do I make code inside a list, or a list inside a list?"

  • There's a lot of variance in Markdown implementations, so teach the most common parts of Markdown, and cover the optional / less common variations either in the advanced reference areas or in extra bonus sections. People do love their tables and footnotes! We recommend using a CommonMark compatible implementation, but it is not a requirement.

  • Your code must be MIT licensed.

  • Judging will be completely at the whim of myself and John MacFarlane. Our decisions will be capricious, arbitrary, probably nonsensical, and above all, final.

  • We'll run this contest for a period of one month, from today until April 28th, 2015.

  • If I have hastily left out any clarifying rules I should have had, they will go here.

Of course, the real reward for building is the admiration of your peers, and the knowledge that an entire generation of people will grow up learning basic Markdown skills through your contribution to a global open source project.

But on top of that, I am offering … fabulous prizes!

  1. Let's start with my Recommended Reading List. I count sixteen books on it. As long as you live in a place Amazon can ship to, I'll send you all the books on that list. (Or the equivalent value in an Amazon gift certificate, if you happen to have a lot of these books already, or prefer that.)

  2. Second prize is a CODE Keyboard. This can be shipped worldwide.

  3. Third prize is you're fired. Just kidding. Third prize is your choice of any three books on my reading list. (Same caveats around Amazon apply.)

Looking for a place to get started? Check out:

If you want privacy, you can mail your entries to me directly (see the about page here for my email address), or if you are comfortable with posting your contest entry in public, I'll create a topic on talk.commonmark for you to post links and gather feedback. Leaving your entry in the comments on this article is also OK.

We desperately need a great place that we can send everyone to learn Markdown, and we need your help to build it. Let's give this a shot. Surprise and amaze us!


by Jeff Atwood at March 28, 2015 12:19 AM

HN Daily

March 27, 2015

StackOverflow

ZMQ on Xampp (Windows)

I've got a problem with the installation of the ZMQ extension for PHP 5.5 on Windows. I've successfully downloaded the files php_zmq.dll and libzmq.dll from PECL. But when I try to register the extension (I moved php_zmq.dll and libzmq.dll to PHP's extension folder, C:\xampp\php\ext) and restart the server, I always get the message that libzmq.dll is required, even though both files are in the same directory. I've entered this in the php.ini: extension=php_zmq.dll

I hope someone can help me. If you need more information, just ask.

Thanks

by user3127892 at March 27, 2015 11:59 PM

Spark word count example: why the error "value split is not a member of Char" at the REPL?

From the Spark example (https://spark.apache.org/examples.html) , the code looks like:

    val file = spark.textFile("hdfs://...")
     val counts = file.flatMap(line => line.split(" "))
                 .map(word => (word, 1))
                 .reduceByKey(_ + _)

And it works when compiled. However, if I try this exact code at the Spark REPL:

scala> val lines = "abc def"
lines: String = abc def

scala> val words = lines.flatMap(_.split(" "))
<console>:12: error: value split is not a member of Char
       val words = lines.flatMap(_.split(" "))
                                   ^

What gives??
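A guess at the explanation, with a tiny sketch: in the Spark example, file is an RDD[String] with one element per line, whereas lines here is a single String, so flatMap iterates over its Chars, and Char has no split method:

val lines = Seq("abc def")               // stand-in for an RDD[String]
val words = lines.flatMap(_.split(" "))  // List(abc, def) - compiles fine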

thanks Matt

by matthieu lieber at March 27, 2015 11:49 PM

`.split` method in clojure returns unexpected results

For part of a class project I need to read in a file representing a graph in Clojure. Here is a link to an example file. The file structure for all the files I could possibly read in is as follows:

c Unknown number of lines
c That start with "c" and are just comments
c The rest of the lines are edges
e 2 1
e 3 1
e 4 1
e 4 2
e 4 3
e 5 1
e 5 2

The issue that I am having is trying to split a line based on spaces. In my REPL I have done

finalproject.core> (.split "e 1 2" " ")
#<String[] [Ljava.lang.String;@180f214>

I am not sure what that means exactly; I think it refers to the memory location of a String[], though I am not sure why it is displayed like that. If I insert a # in front of the split string, which I think denotes a regular expression, I receive an error:

finalproject.core> (.split "e 1 2" #" ")
ClassCastException java.util.regex.Pattern cannot be cast to java.lang.String 

Currently my entire implementation of this module is below, and I am pretty sure it would work if I could properly use the split function.

(defn lineToEdge [line]
  (cond (.startsWith line "e")
        (let [split-line (.split line " ")
              first-str (split-line 1)
              second-str (split-line 2)]
          ((read-string first-str) (read-string second-str)))))

(defn readGraphFile [filename, numnodes]
  (use 'clojure.java.io)
  (let [edge-list 
        (with-open [rdr (reader filename)]
          (doseq [line (line-seq rdr)]
           (lineToEdge line)))]
    (reduce add-edge (empty-graph numnodes) edge-list)))

I have not had a chance to test readGraphFile in any way, but when I try to use lineToEdge with some dummy input I receive the error:

finalproject.core> (lineToEdge "e 1 2")
ClassCastException [Ljava.lang.String; cannot be cast to clojure.lang.IFn

Suggestions as to where I went wrong?
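A sketch of the fixes as I read them: the #<String[] ...> output is just how the REPL prints a Java string array; .split takes a String, not a regex Pattern (use clojure.string/split if you want regexes); and the ClassCastException comes from calling the array as a function, since Java arrays need aget/nth rather than function-call syntax. The result should presumably also be a vector, not a number applied to a number:

(defn lineToEdge [line]
  (when (.startsWith line "e")
    (let [split-line (.split line " ")
          first-str  (aget split-line 1)   ; element 0 is the leading "e"
          second-str (aget split-line 2)]
      [(read-string first-str) (read-string second-str)])))

(lineToEdge "e 1 2")  ;=> [1 2]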

by KDecker at March 27, 2015 11:48 PM

CompsciOverflow

VC Dimension for open intervals

I'm having difficulty calculating VC dimensions for these scenarios; any help is appreciated.

  1. $H = [a,\infty)$; I think it's 1, since if the label is less than $a$ it's not going to shatter.

  2. One-sided intervals like $H = (-\infty, a)$ or $[a,\infty)$; I am not sure about this one, but I'd guess 1 for the same reason.

  3. Finite unions of one-sided intervals like #2; I'd guess it's $K$ for $K$ intervals.

Can someone please let me know if my answers for 1 and 2 are right, and the reasoning why? Any hint for #3 would be very helpful. Thanks.

by AD.Net at March 27, 2015 11:32 PM

StackOverflow

SBT cannot append Seq[Object] to Seq[ModuleID]

SBT keeps failing with improper append errors. I'm using the exact format of build files I have seen numerous times.

build.sbt:

lazy val backend = (project in file("backend")).settings(
name := "backend",
libraryDependencies ++= (Dependencies.backend)
).dependsOn(api).aggregate(api)

dependencies.scala:

import sbt._

object Dependencies {

lazy val backend = common ++ metrics

val common = Seq(
"com.typesafe.akka" %% "akka-actor" % Version.akka,
"com.typesafe.akka" %% "akka-cluster" % Version.akka,
"org.scalanlp.breeze" %% "breeze" % Version.breeze,
"com.typesafe.akka" %% "akka-contrib" % Version.akka,
"org.scalanlp.breeze-natives" % Version.breeze,
"com.google.guava" % "guava" % "17.0"
)

val metrics = Seq("org.fusesource" % "sigar" % "1.6.4")
}

I'm not quite sure why SBT is complaining:

error: No implicit for Append.Values[Seq[sbt.ModuleID], Seq[Object]] found,
so Seq[Object] cannot be appended to Seq[sbt.ModuleID]
libraryDependencies ++= (Dependencies.backend)
                    ^
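A hunch, sketched (I can't compile your build, so treat it as an assumption): in common, this line has only group % version and no artifact segment, so it does not evaluate to a ModuleID, which widens the whole Seq to Seq[Object]:

"org.scalanlp.breeze-natives" % Version.breeze      // not a ModuleID

// presumably intended:
"org.scalanlp" %% "breeze-natives" % Version.breeze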

by Azeli at March 27, 2015 11:28 PM

Python style decorator in Scala

In Python I can do something like this:

def wrap(f):
    def wrapper(*args, **kwargs):
        print "args: ", args, kwargs
        res = f(*args, **kwargs)
        print "result: ", res
        return res
    return wrapper

This lets me wrap any function regardless of the arguments they take. For instance:

In [8]: def f(thing):
    print "in f:", thing
    return 3

In [9]: wrapped_f = wrap(f)

In [10]: wrapped_f(2)
args:  (2,) {}
in f: 2
result:  3
Out[10]: 3

Is there a way to do something similar (write a wrapper that can be applied to any function regardless of its input/output types) in Scala?
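A sketch of the closest statically typed analogue for the one-argument case (Scala can't abstract over arbitrary arity the way *args does, so you'd write one wrapper per Function arity, or reach for a macro library):

def wrap[A, B](f: A => B): A => B = { a =>
  println(s"args: $a")
  val res = f(a)
  println(s"result: $res")
  res
}

val f = (thing: Int) => { println(s"in f: $thing"); 3 }
val wrappedF = wrap(f)
wrappedF(2)   // prints the args, "in f: 2", and the result; returns 3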

by Jules Olléon at March 27, 2015 11:23 PM

MLlib classification example stops in stage 1

[reposted from spark community]

Hello Everyone,

I am trying to run this MLlib example from Learning Spark: https://github.com/databricks/learning-spark/blob/master/src/main/scala/com/oreilly/learningsparkexamples/scala/MLlib.scala#L48

Things I'm doing differently:

1) instead of their spam.txt and normal.txt I have text files with 200 words...nothing huge at all and just plain text, with periods, commas, etc.

3) I've used numFeatures = 200, 1000 and 10,000

Error: I keep getting stuck when I try to run the model (based on details from the UI below):

val model = new LogisticRegressionWithSGD().run(trainingData)

It will freeze on something like this:

[Stage 1:==============> (1 + 0) / 4]

Some details from webui:

org.apache.spark.rdd.RDD.count(RDD.scala:910)
org.apache.spark.mllib.util.DataValidators$$anonfun$1.apply(DataValidators.scala:38)
org.apache.spark.mllib.util.DataValidators$$anonfun$1.apply(DataValidators.scala:37)
org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm$$anonfun$run$2.apply(GeneralizedLinearAlgorithm.scala:161)
org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm$$anonfun$run$2.apply(GeneralizedLinearAlgorithm.scala:161)
scala.collection.LinearSeqOptimized$class.forall(LinearSeqOptimized.scala:70)
scala.collection.immutable.List.forall(List.scala:84)
org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm.run(GeneralizedLinearAlgorithm.scala:161)
org.apache.spark.mllib.regression.GeneralizedLinearAlgorithm.run(GeneralizedLinearAlgorithm.scala:146)
$line21.$read$$iwC$$iwC$$iwC$$iwC.<init>(<console>:33)
$line21.$read$$iwC$$iwC$$iwC.<init>(<console>:38)
$line21.$read$$iwC$$iwC.<init>(<console>:40)
$line21.$read$$iwC.<init>(<console>:42)
$line21.$read.<init>(<console>:44)
$line21.$read$.<init>(<console>:48)
$line21.$read$.<clinit>(<console>)
$line21.$eval$.<init>(<console>:7)
$line21.$eval$.<clinit>(<console>)
$line21.$eval.$print(<console>)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

I am not sure what I am doing wrong...any help is much appreciated, thank you!

by SparkKafkaSetup at March 27, 2015 11:19 PM

TheoryOverflow

Sample complexity of distinguishing two Gaussian distributions?

Below is a description of the problem:


Suppose I have two $p$-dimensional Gaussian distributions with the same covariance matrix $\Sigma$ and means $\mu_1$, $\mu_0$.

And I can get $n$ samples $X_1^{(1)}, X_1^{(2)}, \dots, X_1^{(n)}$ from $\mathcal{N}(\mu_1, \Sigma)$, and another $n$ samples $X_0^{(1)}, X_0^{(2)}, \dots, X_0^{(n)}$ from $\mathcal{N}(\mu_0, \Sigma)$.

Then I flip an unbiased coin $b\in\{1,0\}$, and take a sample $s$ from $\mathcal{N}(\mu_b,\Sigma)$.

I then give the samples $X_1^{(1)}, X_1^{(2)}, \dots, X_1^{(n)}$ and $X_0^{(1)}, X_0^{(2)}, \dots, X_0^{(n)}$ to an algorithm with the sample $s$. The algorithm outputs a bit $b'$ to indicate that the algorithm thinks that $s$ is from $\mathcal{N}(\mu_{b'},\Sigma)$.

Note that the algorithm does not know $\Sigma$, $\mu_1$, $\mu_0$.

The question is: How large should $n$ be so that $\Pr[b=b'] \geq \frac{1}{2}+\epsilon$?

(The bound should be given in terms of $\epsilon$, $\Sigma$, $\mu_1$, $\mu_0$.)


Are there any papers that have solutions to this problem?

by Tianyang Li at March 27, 2015 11:07 PM

Planet Emacsen

(or emacs: recenter-positions, that's not how gravity works!

Yesterday I added a new binding to swiper-map: C-l will now call swiper-recenter-top-bottom. The implementation is really easy, almost nothing to write home about:

(defun swiper-recenter-top-bottom (&optional arg)
  "Call (`recenter-top-bottom' ARG) in `swiper--window'."
  (interactive "P")
  (with-selected-window swiper--window
    (recenter-top-bottom arg)))

An interesting thing that I want to mention though is the customization of the default recenter-top-bottom behavior. This is the default one:

(setq recenter-positions '(middle top bottom))

And this is the logical one that I'm using:

(setq recenter-positions '(top middle bottom))

Try it out, and see if it makes sense to you. For me, when I've just jumped to a function definition, which usually means that the point is on the first line of the function, the first recenter position has to be top, since that will maximize the amount of the function body that's displayed on the screen.

Another use-case is when I'm reading an info or a web page. After a recenter to top, all that I have read is scrolled out of view, and I can continue from the top.

by (or emacs at March 27, 2015 11:00 PM

TheoryOverflow

tie breaker algorithm delay condition

I read in the book "Foundations of Multithreaded, Parallel, and Distributed Programming" (Andrews) that the tie-breaker algorithm can implement "await statements", used for conditional synchronization with busy-waiting loops, if they satisfy the "at-most-once property". As he said, the delay condition doesn't have to be evaluated atomically. Why aren't there problems if we read the latest value of one variable together with a previous value of the other? (I'm talking about the two critical references in the algorithm.)

by johnny_kb at March 27, 2015 10:54 PM

StackOverflow

Speclj not working with lighttable?

Today I had a look at speclj and added it to my project.clj according to the documentation. After I finished my first spec, I hit ctrl+enter after (run-specs) and got the following exception:

Failed trying to require overseer.parse-spec with:
java.lang.ClassNotFoundException: speclj.platform.SpecFailure
URLClassLoader.java:372 java.net.URLClassLoader$1.run
URLClassLoader.java:361 java.net.URLClassLoader$1.run
   (Unknown Source) java.security.AccessController.doPrivileged
URLClassLoader.java:360 java.net.URLClassLoader.findClass
DynamicClassLoader.java:61 clojure.lang.DynamicClassLoader.findClass
ClassLoader.java:424 java.lang.ClassLoader.loadClass
ClassLoader.java:357 java.lang.ClassLoader.loadClass
    (Unknown Source) java.lang.Class.forName0
      Class.java:340 java.lang.Class.forName
        RT.java:2065 clojure.lang.RT.classForName
   Compiler.java:978 clojure.lang.Compiler$HostExpr.maybeClass
   Compiler.java:756 clojure.lang.Compiler$HostExpr.access$400
  Compiler.java:2540 clojure.lang.Compiler$NewExpr$Parser.parse

Afterwards I used Leiningen on the command line, and with lein test it works smoothly. Is there anything I have to take care of when I use speclj with Light Table?

by u6f6o at March 27, 2015 10:33 PM

CompsciOverflow

Primitive Recursion and course-of-values recursion - examples?

I ran into examples of course-of-values recursion that I do not trivially understand:

In defining a function by primitive recursion, the value at the next argument, $f(n+1)$, depends only on the value at the current argument, $f(n)$. In a definition by course-of-values recursion, $f(n+1)$ depends on the value(s) at some or all of the preceding arguments, $f(n),\dots,f(0)$. A very basic example of definition by course-of-values recursion is the Fibonacci numbers...

My examples, from a computation course, claim that the following $f_1$ and $f_2$ are primitive recursive. I reproduce them here:

Let $g$ be a primitive recursive function,

  1. $f_1(0)=c_1$, $f_1(1)=c_2$, $f_1(x+2)=g(x,f_1(x),f_1(x+1))$, and

  2. $f_2(0)=c$, $f_2(x+1)=g(x,[f_2(0),\dots,f_2(x)])$ are primitive recursive.

I couldn't see why $f_1$ and $f_2$ are primitive recursive. Any ideas for showing that these are P.R.?

Edit 1: after Yuval's hint, $f_1$ is easy, but for $f_2$ there is a problem; I have tried for more than three days but have not been able to create an easy encoding for $f_2$. Any hint or idea?
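A sketch of the standard trick (my suggestion, using any primitive recursive sequence encoding $\langle\cdot\rangle$ with a P.R. append $\frown$ and component extraction $(\cdot)_i$): define the history function by ordinary primitive recursion and extract $f_2$ from it,

$$\bar f(0) = \langle c \rangle, \qquad \bar f(n+1) = \bar f(n) \frown \langle\, g(n, \bar f(n)) \,\rangle, \qquad f_2(n) = (\bar f(n))_n .$$

Since $\bar f$ is defined by primitive recursion from P.R. functions, $\bar f$ is P.R., and hence so is $f_2$.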

by LogicLove at March 27, 2015 10:21 PM

StackOverflow

Why hasn't functional programming taken over yet?

I've read some texts about declarative/functional programming (languages), tried out Haskell, and written one myself. From what I've seen, functional programming has several advantages over the classical imperative style:

  • Stateless programs; No side effects
  • Concurrency; Plays extremely nice with the rising multi-core technology
  • Programs are usually shorter and in some cases easier to read
  • Productivity goes up (example: Erlang)

  • Imperative programming is a very old paradigm (as far as I know) and possibly not suitable for the 21st century

Why are companies using or programs written in functional languages still so "rare"?

Why, when looking at the advantages of functional programming, are we still using imperative programming languages?

Maybe it was too early for it in 1990, but today?

by pankrax at March 27, 2015 10:19 PM

Should/can I use `assoc` in this function to redefine a function argument?

I am implementing the Bron-Kerbosch algorithm in Clojure for a class project and having some issues. The issue lies in the final lines of the algorithm

BronKerbosch1(R, P, X):
 if P and X are both empty:
       report R as a maximal clique
   for each vertex v in P:
       BronKerbosch1(R ⋃ {v}, P ⋂ N(v), X ⋂ N(v))
       P := P \ {v} ;This line
       X := X ⋃ {v} ;This line

I know in Clojure there is no notion of "set x = something". But I do know there is the assoc function, which I think is similar. I would like to know if assoc would be appropriate to complete my implementation.

In my implementation graphs are represented as so

[#{1 3 2} #{0 3 2} #{0 1 3} #{0 1 2}]

Where the 0th node is represented by the first set in the vector, and the values in the set represent edges to other nodes. So the above represents a graph with 4 nodes that is complete (all nodes are connected to all other nodes).

So far my algorithm implementation is

(defn neighV [graph, v]
  (let [ret-list (for [i (range (count graph)) :when (contains? (graph i) v)] i)]
    ret-list))

(defn Bron-Kerbosch [r, p, x, graph, cliques]
  (cond (and (empty? p) (empty? x)) (conj cliques r)
        :else
        (for [i (range (count p))]
          (conj cliques (Bron-Kerbosch (conj r i) (disj p (neighV graph i) (disj x (neighV graph i)) graph cliques)))
          )))

So right now I am stuck altering p and x as per the algorithm. I think that I can use assoc to do this, but I believe it only applies to maps (and vectors). Would it be possible to use it here, or could someone recommend another function?
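A sketch of one idiomatic shape for this (assuming r, p, x are sets, neighV is as above, and clojure.set is required): instead of assignment, thread the shrinking p and growing x through a loop/recur, with disj and conj playing the roles of P := P \ {v} and X := X ∪ {v}.

(require '[clojure.set :as cset])

(defn bron-kerbosch [r p x graph cliques]
  (if (and (empty? p) (empty? x))
    (conj cliques r)
    (loop [p p, x x, cliques cliques]
      (if (empty? p)
        cliques
        (let [v  (first p)
              nv (set (neighV graph v))]
          (recur (disj p v)                       ; P := P \ {v}
                 (conj x v)                       ; X := X U {v}
                 (bron-kerbosch (conj r v)
                                (cset/intersection p nv)
                                (cset/intersection x nv)
                                graph
                                cliques)))))))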

by KDecker at March 27, 2015 10:14 PM

How can I adapt between a non-ZMQ socket and pyzmq?

I want to write a bridge adapting between a non-ZMQ socket and a ZMQ socket.

client code:

import socket

if __name__ == '__main__':

    HOST = "localhost"
    PORT = 8888
    BUFFER = 4096

    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        print sock
        ret = sock.connect((HOST, PORT))
        print ret

        ret = sock.send('hello, tcpServer!')
        print ret
        recv = sock.recv(BUFFER)
        print ('[tcpServer siad]: %s' % recv)
        sock.close()
    except Exception as e:
        print e

Proxy code; it uses this proxy to send requests to the ZMQ_REP server:

import zmq

if __name__ == '__main__':

    context = zmq.Context()
    socket = context.socket(zmq.STREAM)
    socket.bind("tcp://*:8888")

    socket_req = context.socket(zmq.REQ)
    socket_req.connect("tcp://localhost:5556")

    while True:
        clientid, message = socket.recv_multipart();

        print("id: %r" % clientid)
        print("request:",message.decode('utf8'))

        socket_req.send(clientid, flags=zmq.SNDMORE, copy=False)
        socket_req.send("Hi", copy=False)

        clientid, message = socket_req.recv_multipart()


        print("id: %r" % clientid)
        print("request:",message.decode('utf8'))

ZMQ_REP server code:

import zmq
import time
import sys


if __name__ == '__main__':

    port = '5556'
    if len(sys.argv) > 1:
        port = sys.argv[1]
        int(port)

    context = zmq.Context()
    socket = context.socket(zmq.REP)
    socket.bind("tcp://*:%s" % port)

    while True:
        message = socket.recv()
        print "Received request: ", message
        time.sleep(1)
        socket.send("world from %s" % port)

the REP server gets this error:

Received request:  k
Traceback (most recent call last):
  File "req_server.py", line 21, in <module>
    socket.send("world from %s" % port)
  File "zmq/backend/cython/socket.pyx", line 574, in zmq.backend.cython.socket.Socket.send (zmq/backend/cython/socket.c:5434)
  File "zmq/backend/cython/socket.pyx", line 621, in zmq.backend.cython.socket.Socket.send (zmq/backend/cython/socket.c:5196)
  File "zmq/backend/cython/socket.pyx", line 181, in zmq.backend.cython.socket._send_copy (zmq/backend/cython/socket.c:2035)
  File "zmq/backend/cython/checkrc.pxd", line 21, in zmq.backend.cython.checkrc._check_rc (zmq/backend/cython/socket.c:6248)
zmq.error.ZMQError: Operation cannot be accomplished in current state

by hangkongwang at March 27, 2015 10:10 PM

TheoryOverflow

Good algorithms to solve ATSP

What are some good neighborhood-based local search algorithms or strategies for solving the asymmetric TSP? I see many 2-opt and k-opt based algorithms (e.g. Lin-Kernighan implementations), but I think these algorithms are more time-consuming, since their evaluation complexity is not constant (assuming my calculations are correct).

What are some good alternatives that are less time-consuming? Are there any techniques based on swapping nodes, for example? Could you suggest a good paper about such algorithms?

by yafrani at March 27, 2015 10:08 PM

StackOverflow

How to get the top-k frequent words in Spark without sorting?

In Spark, we can easily use map-reduce to count word occurrences and then use sort to get the top-k frequent words:

1) Sort locally inside each node, keeping only the top-k results => no network communication

    val partialTopK = wordCount.mapPartitions(it => {
      val a = it.toArray
      a.sortBy(-_._2).take(10).iterator
    }, true)

2) Collect the local top-k results

    val collectedTopK = partialTopK.collect
    collectedTopK.size              // 940

    Total time: 8.68 secs           // faster than the naive solution

3) Compute the global top-k at the master

    val topK = collectedTopK.sortBy(-_._2).take(10)

    No communication, everything done on the master node

But I want to know: is there a better solution that avoids sorting at all?
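One library-level alternative (a sketch, assuming wordCount: RDD[(String, Int)] as above): RDD.top keeps a bounded priority queue of size k per partition and merges the per-partition queues at the driver, so no partition is ever fully sorted.

    val topK: Array[(String, Int)] =
      wordCount.top(10)(Ordering.by[(String, Int), Int](_._2))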

by rockmerockme at March 27, 2015 10:02 PM

CompsciOverflow

Edges of minimum spanning trees.

Is an edge of a minimum spanning tree in a graph $G$ the shortest path between its incident vertices? I think this might be true due to the cut property, but I am unable to prove it.

by smilingbuddha at March 27, 2015 09:47 PM

In which pipeline stage are exceptions detected?

Do you immediately handle an exception when it occurs (for example an overflow exception in the EX stage), or do you wait until the final pipeline stage and then check whether any interrupts have occurred?

by gilianzz at March 27, 2015 09:46 PM

StackOverflow

importing scala classes into matlab?

I haven't looked very deeply into this, but I've recently learned that it is possible to import Java classes into the MATLAB workspace. This made me wonder whether it is possible to import Scala classes as well. A quick Google search didn't reveal much, so I was wondering if this is possible.

by user27886 at March 27, 2015 09:45 PM

TheoryOverflow

What is a "level-r pseudo expectation functional"?

In the context of the SOS hierarchy papers, it seems that a "level-r pseudo-expectation functional" is the same as an operator taking expectations of functions, except that this one has the restriction that the expectation of the square of a function is guaranteed to be zero only when the function is a polynomial of degree $\leq \frac{r}{2}$.

  • Is the above right?

  • So the polynomials to which one would apply the "level-r pseudo-expectation functional" are what are called "level-r fictitious random variables"?

  • Conventionally in optimization questions one would say something like "maximize $P_0$ given that $P_i^2=0$ for $i =1,2,..,m$", but the "r-round SOS SDP relaxation" of this same question would be to choose a "level-r pseudo-expectation functional", say $\tilde{E}$, and say "maximize $\tilde{E}[P_0]$ given that $\tilde{E}[P_i^2]=0$ for $i =1,2,..,m$ with $deg(P_i) \leq \frac{r}{2}$"

Is the above right? And if so, how is a specific $\tilde{E}$ chosen to do the relaxation?

by user6818 at March 27, 2015 09:45 PM

UnixOverflow

Network fail on FreeBSD: Ping to router fails, but router believes computer is connected

I have a TP-Link TL-WN851N wireless adapter, which is based on an Atheros device. When I attempt to connect to my WPA2 wireless network, ifconfig wlan0 tells me that the connection is 'associated'. My computer also shows up as connected in the list on the router. However, I cannot ping anything, not even the router itself.

On the same system, running Linux, there are no connection problems, and running Windows, there are occasional dropped connections, but no failure to reconnect. DHCP is noticeably slow on both of these however.

After doing some debugging with people on the #freebsd channel on Freenode, I have found the following:

  • arp -an shows no routes.
  • If I attempt to get an IP address from DHCP, it fails. On the FreeBSD system, it shows DHCPDISCOVER, then gives an error about no DHCPOFFER. According to my router's web interface, it believes it has given the computer an IP address after this.

by Macha at March 27, 2015 09:36 PM

CompsciOverflow

What is the algorithm to add 2 binary numbers with boolean operations?

What is the algorithm to add two binary numbers over the basis {negation, conjunction, disjunction} in linear time? The program also needs to be linear, meaning there can only be assignments involved (a straight-line program).

One example of such a program is Karatsuba's algorithm for multiplying two numbers. Here it is:

x = a * 2^(n/2) + b

y = c * 2^(n/2) + d

z = x * y = (a * 2^(n/2) + b) * (c * 2^(n/2) + d) = ac * 2^n + (ad + bc) * 2^(n/2) + bd

u = (a + b) (c + d)

v = a * c

w = b * d

z = v * 2^n + (u - v - w) * 2^(n/2) + w // result

Example: x = 1011, y = 1101

u = (a+b)(c+d) = 101 * 100

v = 10 * 11 = 110

w = b * d = 11 * 01 = 11

z = 110 * 2^4 + (10100 - 110 - 11) * 2^2 + 11 = 10001111 (binary) = 143 (decimal)
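For the addition itself, here is a sketch of a ripple-carry adder (my own illustration, in Scala) that uses only negation, conjunction and disjunction; xor is derived as xor(a, b) = (a || b) && !(a && b). It is linear in the number of bits, and the loop unrolls into a straight-line sequence of assignments:

    // Derived from the allowed basis: (a || b) && !(a && b)
    def xor(a: Boolean, b: Boolean): Boolean = (a || b) && !(a && b)

    // Bits are little-endian: index 0 is the least significant bit.
    def add(x: Array[Boolean], y: Array[Boolean]): Array[Boolean] = {
      val n = math.max(x.length, y.length)
      val sum = new Array[Boolean](n + 1)
      var carry = false
      for (i <- 0 until n) {
        val a = i < x.length && x(i)
        val b = i < y.length && y(i)
        sum(i) = xor(xor(a, b), carry)            // sum bit of the full adder
        carry = (a && b) || (carry && xor(a, b))  // carry out of the full adder
      }
      sum(n) = carry
      sum
    }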

by paulpaul1076 at March 27, 2015 09:30 PM

Recursive algorithm to compute a sum of product like function

I am working on a recursive formula from discrete mathematics which seems very difficult to compute. The formula is as follows:

$F_{i,j}(m)=\sum_{t=j}^{m}\left [ x_{ij}\cdot\sum_{k=1}^{m}\sum_{l=j-1}^{m} F_{k,l}(m-ij) \right ]$

It's given that $F_{i,j}(m)=0$ for $m<0$.

Here $x_{ij}$ is defined as a random number with the property that $0<x_{ij}<j$.

How do I write a recursive algorithm to compute $F_{i,j}(m)$?

by precision at March 27, 2015 09:25 PM

StackOverflow

How to implement a short-circuit with IO monad in Scala

I use a standard IO monad.

And at some point, I need to short-circuit: on a given condition, I don't want to run the subsequent IOs.

Here is my solution, but I find it too verbose and inelegant:

def shortCircuit[A](io: IO[A], continue: Boolean) =
  io.map(a => if (continue) Some(a) else None)

for {
  a <- io
  b <- shortCircuit(io, a == 1)
  c <- shortCircuit(io, b.map(_ == 1).getOrElse(false))
  d <- shortCircuit(io, b.map(_ == 1).getOrElse(false))
  e <- shortCircuit(io, b.map(_ == 1).getOrElse(false))
} yield …

For example, on the 3rd, 4th and 5th lines, I need to repeat the same condition.

Is there a better way?
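One small rearrangement (a sketch of my own, not a library feature): compute the shared flag once with a value definition inside the for comprehension, so the condition is written a single time:

    for {
      a  <- io
      b  <- shortCircuit(io, a == 1)
      ok =  b.exists(_ == 1)        // the shared condition, computed once
      c  <- shortCircuit(io, ok)
      d  <- shortCircuit(io, ok)
      e  <- shortCircuit(io, ok)
    } yield …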

by Yann Moisan at March 27, 2015 09:20 PM

/r/compsci

Planet Clojure

Extending Prismatic Schema to Higher Order Functions

Over the past year I have used Prismatic Schema extensively on a large Clojure project. We really like the aid that it offers in understanding, debugging, and using our code. However, we regularly feel a bit let down when an argument to a function is another function: Prismatic Schema doesn't allow you to say much in these cases beyond "this arg is a function".

To address this we extended Prismatic Schema to allow us to add type annotations to the function arguments in higher-order functions (in addition to several other small extensions). This is done by calling s/fn which expects [output-schema & input-schemas]

Usage examples and details of how it works are here

The type checking for the most part is simply checking that the type of the input function exactly matches the declared type. So the checker largely ignores issues of type hierarchies, covariance, contravariance, etc. As chouser pointed out to me the basic issue this feature raises is that instead of comparing parameter instance objects to declared argument types, this feature requires comparing declared types to declared types. This is one way in which the semantics of this feature are different than “normal” Prismatic schema.

I am torn about the code. On the one hand I am dissatisfied that it is not a full, general solution. On the other hand I can shrug and recognize that for a class of functions and usage patterns it works just fine. So while from a type theoretic perspective it is quite limited, we have started using it and getting benefit from it.

by David McNeil at March 27, 2015 09:10 PM

DragonFly BSD Digest

Keymap details

If you’re looking to change your DragonFly system’s keymapping to support a non-US character set, use this users@ post from Adolf Augustin as a cheat sheet to make all the right changes.

by Justin Sherrill at March 27, 2015 09:06 PM

Fefe

The good news: the Bundeswehr frigates' on-board guns ...

The good news: the Bundeswehr frigates' on-board guns work.

The bad news: we know this because they accidentally fired on a motorboat off Cape Town.

The other bad news: they missed. You would think that in an emergency our navy would be capable of sinking a motorboat. I mean one that actually poses a threat, of course.

March 27, 2015 09:00 PM

StackOverflow

max function on list does not work in Scala?

Here is a snippet of Scala code, which I think is quite simple:

val m1 = Map("age"->60,"name"->"x")
val m2 = Map("age"->99,"name"->"j")
val l = List(m1,m2)
val max = l.maxBy(_("age"))

However, instead of the expected result, val m2 = Map("age"->99,"name"->"j"), I get an error:

<console>:13: error: No implicit Ordering defined for Any.

I know there is something wrong with the implicit parameter, but I don't know how to solve this problem.

Update: suppose further that I need a more general solution for this, a function

def max(l: List[Map[String, Any]], key: String)

such that max(l,"age") == Map("age"->99,"name"->"j") and max(l,"name") == Map("age"->60,"name"->"x").
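One way out (a sketch of my own): no Ordering[Any] exists, so give the compiler a concrete type to compare:

    // The values are typed Any, so recover the Int before comparing:
    val oldest = l.maxBy(_("age").asInstanceOf[Int])  // Map(age -> 99, name -> j)

    // A generic version, assuming all values under `key` share one comparable type T:
    def max[T: Ordering](l: List[Map[String, Any]], key: String): Map[String, Any] =
      l.maxBy(m => m(key).asInstanceOf[T])

    max[Int](l, "age")     // Map(age -> 99, name -> j)
    max[String](l, "name") // Map(age -> 60, name -> x)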

by Jade Tang at March 27, 2015 08:55 PM

Saving as Text in Spark 1.3.0 using DataFrames in Scala

I am using Spark version 1.3.0 and using DataFrames with SparkSQL in Scala. In version 1.2.0 there was a method called "saveAsText". In version 1.3.0, using DataFrames, there is only a "save" method, and the default output format is parquet.
How can I specify that the output should be TEXT using the save method?

// sc is an existing SparkContext.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
// this is used to implicitly convert an RDD to a DataFrame.
import sqlContext.implicits._

// Define the schema using a case class.
// Note: Case classes in Scala 2.10 can support only up to 22 fields. To work around this limit,
// you can use custom classes that implement the Product interface.
case class Person(name: String, age: Int)

// Create an RDD of Person objects and register it as a table.
val people = sc.textFile("examples/src/main/resources/people.txt").map(_.split(",")).map(p => Person(p(0), p(1).trim.toInt)).toDF()
people.registerTempTable("people")

// SQL statements can be run by using the sql methods provided by sqlContext.
val teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")

teenagers.save("/user/me/out")
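One workaround (a sketch, assuming the Spark 1.3 API): drop down to the DataFrame's underlying RDD[Row] and save that as plain text:

    teenagers.rdd
      .map(row => row.getString(0))   // pull the single "name" column out of each Row
      .saveAsTextFile("/user/me/out")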

by jeffrey podolsky at March 27, 2015 08:54 PM

Template is not overriding after the default installation of tomcat7

I have a simple Ansible role that does the following tasks:

  1. Install tomcat7
  2. Update the /etc/default/tomcat7 - this is for configuring heap and other configuration
  3. Update /etc/tomcat7/server.xml - this will override the tomcat port from 8080 to 80
  4. restart the tomcat service

This is how my role looks like:

- name: Update apt cache
  apt: update_cache=yes

- name: Install Tomcat 7
  apt: pkg=tomcat7 state=present

- name: Configure tomcat memory/java_home configuration
  template: src=tomcat7.j2 dest=/etc/default

- name: Configure tomcat server configuration, port, connections ssl etc
  template: src=server.xml.j2 dest=/etc/tomcat7

  notify:
    - tomcat7-restart

This file is stored in roles/task and my templates in roles/template.

When I run the playbook I don't see any errors or warnings, but when I go and check the actual file, it's not updated; it shows the default content that comes with the tomcat7 installation.

Please let me know if you have any idea what I am doing wrong here!

by geek at March 27, 2015 08:48 PM

How to compute the height profile of a Tetris stack most efficiently?

Problem Statement

We're given an array of integers, stack, of length height. The parameter width tells us that at most the width lowest bits in each entry of stack are set.

Compute an array profile of length width such that profile[i] == max_i, where max_i is maximal such that stack[max_i] has the i-th bit set.

How can I achieve this in a more efficient way than below?

Current solution

Currently I go over the columns and check each bit separately.

Scala code:

// width and height come from the enclosing scope (typical values below)
def tetrisProfile(stack: Array[Int]): Array[Int] = {
  var col = 0
  val profile = new Array[Int](width)
  while(col < width){
    var row = 0
    var max = 0
    while(row < height){
      if(((stack(row) >> col) & 1) == 1)
        max = row + 1
      row += 1
    }
    profile(col) = max
    col += 1
  }
  return profile
}

Typical values

  • the array size height is 22
  • the width is 10
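One possible improvement (a sketch of my own, not benchmarked): only the topmost set bit per column matters, so scan rows from the top and stop as soon as every column has been seen; the per-row work is proportional to the number of newly seen columns:

    def tetrisProfile2(stack: Array[Int], width: Int): Array[Int] = {
      val profile = new Array[Int](width)
      var remaining = (1 << width) - 1          // columns with no height yet
      var row = stack.length - 1
      while (row >= 0 && remaining != 0) {
        var hits = stack(row) & remaining       // columns first seen in this row
        remaining &= ~hits
        while (hits != 0) {
          val col = java.lang.Integer.numberOfTrailingZeros(hits)
          profile(col) = row + 1
          hits &= hits - 1                      // clear the lowest set bit
        }
        row -= 1
      }
      profile
    }

For height 22 and width 10 the constant factors may well dominate, so this is worth measuring rather than assuming.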

by ziggystar at March 27, 2015 08:48 PM

UnixOverflow

Install man on FreeBSD 10.1 based pfsense 2.2

I'm running a pfsense 2.2 router. While not required most of the time, sometimes using the console is the way to go. Unfortunately pfsense doesn't come with

/usr/bin/man

I was able to run

pkg
pkg update

though. Is there some way to install "man" using pkg? If there is, where would I download the man pages for my system?

I've already read How can I install man pages from FreeBSD server via console command? and I will try that approach if using pkg isn't an option. The latter approach is just harder to automate.

by noamik at March 27, 2015 08:32 PM

StackOverflow

How does the pyspark mapPartitions function work?

So I am trying to learn Spark using Python (PySpark). I want to know how the function mapPartitions works, that is, what input it takes and what output it gives. I couldn't find any proper example on the internet. Let's say I have an RDD object containing lists, such as the one below.

[ [1, 2, 3], [3, 2, 4], [5, 2, 7] ] 

And I want to remove the element 2 from all the lists. How would I achieve that using mapPartitions?

by MetallicPriest at March 27, 2015 08:28 PM

Planet Emacsen

Jorgen Schäfer: Buttercup 1.0 released

I just released version 1.0 of Buttercup, the Behavior-Driven Emacs Lisp Testing framework.

Buttercup is a behavior-driven development framework for testing Emacs Lisp code. It is heavily inspired by Jasmine.

Installation and Use

Buttercup is available from Marmalade and MELPA Stable.

Example test suite:

(describe "A suite"
  (it "contains a spec with an expectation"
    (expect t :to-be t)))

See the package homepage above for a full description of the syntax for test suites and specs.

If placed in a file named like my-test.el, this command executed in the same directory will run the suite:


emacs -batch -l buttercup.el -f buttercup-run-discover

by Jorgen Schäfer (noreply@blogger.com) at March 27, 2015 08:12 PM

StackOverflow

Ansible - how ansible_env.PATH is set in SSH session

I was trying to execute a simple "python --version" command using Ansible, and it was not working no matter how I tried: via the shell module, via the command module, via the script module, via a playbook or ad hoc.

I'm always getting the error: unknown option: --

for example playbook:

---

- name: testing
  hosts: myhost
  sudo: False

  tasks:
       - name: python version
         shell: python --version

Then I realized that this is due to how Ansible loads the environment in an SSH session. Effectively the error was not coming from Ansible or command parsing, but from Python version 2.4, which somehow gets into the PATH first (/usr/local/bin).

Seems like Python 2.4 didn't know the "--version" option.

The most interesting part is that when I SSH to the same host as the same user Ansible uses, I get the PATH elements in the correct order, and the first location where python exists is the right one, with python3, while /usr/local/bin is buried deep in the PATH.

But when I added a 'which python' task to the playbook, I saw that Ansible resolves python from /usr/local/bin, and that is the old one (v2.4).

When I execute ansible myhost -m setup I can see that the ansible_env.PATH variable is much shorter than the PATH I get by logging in directly. It would be nice to understand the rules of how this gets set up.

Exactly the same question was asked here: http://grokbase.com/t/gg/ansible-project/1479n0d0qp/ansible-env-path-how-is-it-set but there was no definite answer.

by Eugene Sajine at March 27, 2015 08:10 PM

Using Reader Monad for Dependency Injection

I recently saw the talks Dead-Simple Dependency Injection and Dependency Injection Without the Gymnastics about DI with monads and was impressed. I tried to apply the approach to a simple problem, but failed as soon as it got non-trivial. I really would like to see a running version of dependency injection with:

  • a class that depends on more than one value that has to be injected
  • a class that depends on a class that depends on something to be injected

as in the following example

trait FlyBehaviour { def fly() }
trait QuackBehaviour { def quack() }
trait Animal { def makeSound() }

// needs two behaviours injected
class Duck(val flyBehaviour: FlyBehaviour, val quackBehaviour: QuackBehaviour) extends Animal 
{
   def quack() = quackBehaviour.quack()
   def fly() = flyBehaviour.fly()
   def makeSound() = quack()
}

// needs an Animal injected (e.g. a Duck)
class Zoo(val animal: Animal)

// Spring for example would be able to provide a Zoo instance
// assuming a Zoo in configured to get a Duck injected and
// a Duck is configured to get impl. of FlyBehaviour and QuackBehaviour injected
val zoo: Zoo = InjectionFramework.get("Zoo")
zoo.animal.makeSound()

It would be really helpful to see a sample implementation using the Reader monad, since I feel I am just missing a push in the right direction.

Thanks!
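For reference, a minimal hand-rolled Reader sketch (my own; no DI library assumed) covering both cases above, multiple injected values and a nested dependency:

    case class Reader[C, A](run: C => A) {
      def map[B](f: A => B): Reader[C, B] = Reader(c => f(run(c)))
      def flatMap[B](f: A => Reader[C, B]): Reader[C, B] = Reader(c => f(run(c)).run(c))
    }

    // The environment bundles everything that would otherwise be injected:
    case class Env(fly: FlyBehaviour, quack: QuackBehaviour)

    val duck: Reader[Env, Duck] = Reader(env => new Duck(env.fly, env.quack))
    val zoo: Reader[Env, Zoo] = duck.map(d => new Zoo(d))

    // "Injection" happens once, at the edge of the program
    // (RealFly and RealQuack are hypothetical implementations):
    // zoo.run(Env(new RealFly, new RealQuack)).animal.makeSound()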

by Manuel Schmidt at March 27, 2015 08:05 PM

Planet Clojure

Survey, etc

Just a few quick notes on recent Immutant happenings...

Survey

Yesterday we published a short survey to help us gauge how folks have been using Immutant. Please take a few moments to complete it if you haven't already.

Luminus

The Luminus web toolkit now includes an Immutant profile in its Leiningen project template, so you can now do this:

$ lein new luminus yourapp +immutant
$ cd yourapp
$ lein run -dev

That -dev option is triggering the use of immutant.web/run-dmc instead of immutant.web/run so it should plop you in your browser with code-reloading enabled. You can pass most of the other run options on the command line as well, e.g.

$ lein run port 3000

Beta2 bugs

In our last release, 2.0.0-beta2, we updated our dependency on the excellent potemkin library to version 0.3.11. Unfortunately, that exposed a bug whenever clojure.core/find was used on our Ring request map. Fortunately, it was already fixed in potemkin's HEAD, and Zach was kind enough to release 0.3.12. We've bumped up to that in our incrementals and hence our next release.

We've also fixed a thing or two to improve async support when running inside WildFly.

Plans

We're still hoping to release 2.0.0-Final within a month or so. Now would be a great time to kick the tires on beta2 or the latest incremental to ensure it's solid when we do!

by Jim Crossley at March 27, 2015 08:01 PM

Immutant 2 (The Deuce) Beta2 Released

We're just bananas to announce The Deuce's second beta: Immutant 2.0.0-beta2. At this point, we feel pretty good about the stability of the API, the performance, and the compatibility with both WildFly 8 and the forthcoming WildFly 9.

We expect a final release before spring (in the Northern Hemisphere). We would appreciate all interested parties to try out this release and submit whatever issues you find. And again, big thanks to all our early adopters who provided invaluable feedback on the alpha, beta, and incremental releases.

What is Immutant?

Immutant is an integrated suite of Clojure libraries backed by Undertow for web, HornetQ for messaging, Infinispan for caching, Quartz for scheduling, and Narayana for transactions. Applications built with Immutant can optionally be deployed to a WildFly cluster for enhanced features. Its fundamental goal is to reduce the inherent incidental complexity in real world applications.

What's changed in this release?

The biggest change in this release is a new API for communicating with web clients asynchronously, either via an HTTP stream, over a WebSocket, or using Server-Sent Events. As part of this change, the immutant.web.websocket namespace has been removed, but wrap-websocket still exists, and has been moved to immutant.web.middleware. For more details, see the web guide.

In conjunction with this new API, we've submitted changes to Sente that will allow you to use its next release with Immutant.

For a full list of changes, see the issue list below.

How to try it

If you're already familiar with Immutant 1.x, you should take a look at our migration guide. It's our attempt at keeping track of what we changed in the Clojure namespaces.

The guides are another good source of information, along with the rest of the apidoc.

For a working example, check out our Feature Demo application!

Get It

There is no longer any "installation" step as there was in 1.x. Simply add the relevant dependency to your project as shown on Clojars. See the installation guide for more details.

Get In Touch

If you have any questions, issues, or other feedback about Immutant, you can always find us on #immutant on freenode or our mailing lists.

Issues resolved in 2.0.0-beta2

  • [IMMUTANT-439] - Provide SSE support in web
  • [IMMUTANT-515] - Add :servlet-name to the options for run to give the servlet a meaningful name
  • [IMMUTANT-517] - Allow undertow-specific options to be passed directly to web/run
  • [IMMUTANT-518] - Error logged for every websocket/send!
  • [IMMUTANT-520] - WunderBoss Options don't load properly under clojure 1.7.0
  • [IMMUTANT-521] - Add API for async channels
  • [IMMUTANT-524] - immutant.web/run no longer accepts a Var as the handler
  • [IMMUTANT-526] - Improve the docs for messaging/subscribe to clarify subscription-name

by The Immutant Team at March 27, 2015 08:01 PM

Fefe

Where do the emission limits for environmental pollution actually come from ...

Where do the emission limits for environmental pollution actually come from?

Well, in Australia the polluters set their own limits.

In Germany this approach always goes by the name "self-regulation of industry".

March 27, 2015 08:01 PM

CompsciOverflow

How do I prove that 1 function is an upper bound of the other?

If for every $n > 0$, $T(n) \le h(n)$, and for some $b > 1$, $h(n) = O(h(n/b))$, then how can I prove that $T(n) = O(h(n))$? I understand that $T$ is bounded by $h$, so $h$ must be its upper bound, but how can I prove it? And what does $b$ have to do with anything?

by paulpaul1076 at March 27, 2015 07:57 PM

Lobsters

CompsciOverflow

Proving that any CF language over a 1 letter alphabet is regular

I would like to prove that any context free language over a 1 letter alphabet is regular. I understand there is Parikh's theorem but I want to prove this using the work I have done so far:

Let $L$ be a context free language. So $L$ satisfies the pumping lemma; let $p$ be $L$'s pumping constant. Write $L = L_1 \cup L_2$, where $L_1$ consists of the words $w$ with $|w| < p$ and $L_2$ consists of the words $w$ with $|w| \ge p$. We have a single-letter alphabet, and since $L_1$ has a restriction on the length of its words, $L_1$ is a finite language. Finite languages are regular, so $L_1$ is regular. If I can show that $L_2$ is regular, I can use the fact that the union of regular languages is regular. But I am struggling to show that $L_2$ is regular. I know that since $w \in L_2$ satisfies $|w| \ge p$, by the pumping lemma $w$ can be written as $w = uvxyz$ where $|vxy| \le p$, $|vy| > 0$ and for all $t \ge 0$, $uv^txy^tz \in L$. Since we have a single-letter alphabet (say the letter is $a$), $uv^txy^tz = uxz(vy)^t = uxz(a^{|vy|})^t \in L$. Now what?

by ilikecats at March 27, 2015 07:25 PM

Lobsters

CompsciOverflow

SAT solving with conflict-driven backtracking

I'm stuck on a question. It had no other UIP than the decision variable, so what should I choose as the conflict clause: the first cut, or the clause from the decision variable? Does taking the cut as the conflict clause and adding it to the current clauses help, and if it does, where do I backtrack to? I'm using conflict-driven backtracking.

I'm quite confused about backtracking and about using the Berkmin decision heuristic on a given set of clauses. Thank you very much.

by Pushpa at March 27, 2015 07:10 PM

Skolemization with multiple arguments -- how to unify

Edit: answerers keep finding (valid!) problems with my example. I'll try again. The older version is below the horizontal line. Thanks to Klaus below for pointing out the last problem.

My question is how to unify skolemizations that have differing numbers of arguments. As he points out, there is rarely any need for this. For example, if the statements are

Some homes are blue.
There is a home with two stories.

The skolemizations here should not unify, because we can't know from this if a blue home is two-storied.

If we're going to ever need to unify one skolemization with another, we'll need to know we're talking about the same home. This would do it:

Jane's and Mark's home is blue (their only home).
Jane's home has two stories (her only home).

Here it is in FOPC:

$$ \exists x: has(Jane,x) \wedge has(Mark,x) \wedge home(x) \wedge blue(x) \wedge $$ $$ (\forall y: has(Jane,y) \wedge home(y) \implies x=y) \wedge $$ $$ (\forall y: has(Mark,y) \wedge home(y) \implies x=y) $$ and $$ \exists x: has(Jane,x) \wedge home(x) \wedge twoStories(x) \wedge (\forall y: (has (Jane,y) \wedge home(y) \implies x=y) $$

Converting to conjunctive normal form with skolemization gives us this set:

$$ has(Jane,skolem1(Jane,Mark)) $$ $$ has(Mark,skolem1(Jane,Mark)) $$ $$ home(skolem1(Jane,Mark)) $$ $$ blue(skolem1(Jane,Mark)) $$ $$ \neg has(Jane,y) \vee \neg home(y) \vee skolem1(Jane,Mark) =y $$ $$ \neg has(Mark,y) \vee \neg home(y) \vee skolem1(Jane,Mark)=y $$

$$ home(skolem2(Jane)) $$ $$ twoStories(skolem2(Jane)) $$ $$ \neg has (Jane,y) \vee \neg home(y) \vee skolem2(Jane)=y $$

Now, if I want to reason about whether there's a blue house with two stories -- which I should be able to start by resolving the first and last clauses in the set -- I can't, because the two skolemizations can't unify: they have different numbers of arguments.

Is there a good protocol for picking arguments to prevent this problem -- other than applying human intelligence after the mismatch is noticed?


Here is the old version. It's significantly different, but as Klaus pointed out, the old version simply doesn't give us a skolemization problem (or even a skolem for the first statement):

"Everybody who has a home pays utilities on it," encoded as

$$ \forall x: has(x,skolem1(x)) \wedge home(skolem1(x)) \implies paysUtilsOn(x, skolem1(x)) $$
or $$ \neg has(x, skolem1(x)) \vee \neg home(skolem1(x)) \vee paysUtilsOn(x, skolem1(x)) $$ and "Jane and Mark have a home," encoded as $$ has(Jane,skolem2(Jane,Mark)) \wedge has(Mark,skolem2(Jane,Mark)) \wedge home(skolem2(Jane,Mark)) $$ or $$ has(Jane,skolem2(Jane,Mark)) $$ $$ has(Mark,skolem2(Jane,Mark)) $$ $$ home(skolem2(Jane,Mark)) $$

I want to use resolution to prove Jane pays utilities. The problem is that skolem1 has 1 argument and skolem2 has 2 arguments, so they don't unify.

I'm not sure if this matters -- could I just refer to skolem1 and skolem2 and forget the arguments entirely? If it does matter, how do I resolve the problem so that skolem1 and skolem2 can unify, and resolution can work? Not by finding a way around the problem, but by truly using skolemization correctly. The issue is what to do with skolems that have different argument lists if they seem to occur naturally.

by Alpha Ralpha Boulevarde at March 27, 2015 07:01 PM

Fefe

It was somehow clear that if someone from the CSU becomes transport minister ...

It was somehow clear that if someone from the CSU becomes transport minister, we would end up with data retention. Even if only via the detour of the car toll, "which nobody except the Christian Socials wants".

March 27, 2015 07:00 PM

The Swedish foreign minister recently dared to ...

The Swedish foreign minister recently dared to critically question Saudi Arabia's "cultural differences".
A few weeks ago Margot Wallström, the Swedish foreign minister, denounced the subjugation of women in Saudi Arabia. As the theocratic kingdom prevents women from travelling, conducting official business or marrying without the permission of male guardians, and as girls can be forced into child marriages where they are effectively raped by old men, she was telling no more than the truth. Wallström went on to condemn the Saudi courts for ordering that Raif Badawi receive ten years in prison and 1,000 lashes for setting up a website that championed secularism and free speech. These were ‘mediaeval methods’, she said, and a ‘cruel attempt to silence modern forms of expression’. And once again, who can argue with that?
The exciting part is what happened next. Saudi Arabia recalled its ambassador to Sweden and no longer issues visas to Swedes. The United Arab Emirates immediately joined in. Of course there was also a load of rhetoric to go with it:
The Organisation of Islamic Co-operation, which represents 56 Muslim-majority states, accused Sweden of failing to respect the world’s ‘rich and varied ethical standards’
Oh, so that's how it is. Well then.

March 27, 2015 07:00 PM

Good news from the "peace process" in Turkey: ...

Good news from the "peace process" in Turkey: according to the draft law, police officers may in the future shoot at violent demonstrators in certain situations without having been attacked themselves. Well, good luck prosecuting killer cops in the future! There are also more powers for searches and arrests.

What I find especially striking is that ARD uses the poisoned word "peace process" here, as if they wanted to tell us to our faces that this is just mendacious window dressing and will lead nowhere.

March 27, 2015 07:00 PM

StackOverflow

postgres in vagrant(ubuntu14.04)

I tried to create a simple dev environment with Vagrant but ran into a problem with Postgres.

My Vagrantfile is simple:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "forwarded_port", guest: 8000, host: 8000
  config.vm.network :public_network

  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end

and I use ansible for provision:

- name: Configure development machine
  hosts: all
  sudo: True
  tasks:
    - name: install postgres
      apt: name={{ item }} update_cache=yes
      with_items:
        - postgresql 
        - postgresql-contrib

but something goes wrong and Postgres is installed incorrectly.

When I SSH to the VM, I see strange things:

 $ /etc/init.d/postgresql start
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
        LANGUAGE = (unset),
        LC_ALL = (unset),
        LC_TIME = "uk_UA.UTF-8",
        LC_MONETARY = "uk_UA.UTF-8",
        LC_ADDRESS = "uk_UA.UTF-8",
        LC_TELEPHONE = "uk_UA.UTF-8",
        LC_NAME = "uk_UA.UTF-8",
        LC_MEASUREMENT = "uk_UA.UTF-8",
        LC_IDENTIFICATION = "uk_UA.UTF-8",
        LC_NUMERIC = "uk_UA.UTF-8",
        LC_PAPER = "uk_UA.UTF-8",
        LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
psql: could not connect to server: No such file or directory
        Is the server running locally and accepting

and there is no /etc/postgresql directory (but /etc/postgresql-common is present). Any thoughts?

Github repo

by kharandziuk at March 27, 2015 06:57 PM

Best practices with Akka in Scala and third-party Java libraries

I need to use the memcached Java API in my Scala/Akka code. This API gives you both synchronous and asynchronous methods. The asynchronous ones return java.util.concurrent.Future. There was a question about dealing with Java Futures in Scala here: How do I wrap a java.util.concurrent.Future in an Akka Future?. However, in my case I have two options:

  1. Using synchronous API and wrapping blocking code in future and mark blocking:

    Future {
      blocking {
        cache.get(key) //synchronous blocking call
      } 
    }
    
  2. Using asynchronous Java API and do polling every n ms on Java Future to check if the future completed (like described in one of the answers above in the linked question above).

Which one is better? I am leaning towards the first option because polling can dramatically impact response times. Shouldn't the blocking { } block prevent the whole pool from being blocked?
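For completeness, option 1 spelled out as a self-contained sketch (mine; SyncCache is a hypothetical stand-in for the memcached client, and the global execution context is just for illustration). blocking is a hint that lets a ForkJoin-backed context spawn a compensating thread:

    import scala.concurrent.{Future, blocking}
    import scala.concurrent.ExecutionContext.Implicits.global

    trait SyncCache { def get(key: String): AnyRef }

    def getAsync(cache: SyncCache, key: String): Future[AnyRef] =
      Future {
        blocking {        // marks this task as blocking for the pool
          cache.get(key)  // the synchronous memcached call
        }
      }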

by Dariusz Biskup at March 27, 2015 06:54 PM

transferring an imperative for-loop into idiomatic Haskell

I have some difficulty transferring imperative algorithms into a functional style. The main concept that I cannot wrap my head around is how to fill sequences with values according to their position in the sequence. How would an idiomatic solution for the following algorithm look in Haskell?

A = unsigned char[256]
idx <- 1
for(i = 0 to 255)
    if (some_condition(i))
        A[i] <- idx
        idx++
    else
        A[i] = 0;

The algorithm basically creates a lookup table for the mapping function of a histogram.

Do you know any resources which would help me to understand this kind of problem better?

by j.dog at March 27, 2015 06:50 PM

How to establish multiple JxtaBiDiPipes?

I'm new to JXTA and have mostly been working through the examples in the Programmers Guide up to this point. Unfortunately, all the examples in the guide limit themselves to only two peers, use different code for each peer, and use hard-coded values that you would not have access to in practice.

I recently got a simple chat program working in which both peers use the same code by randomly taking turns acting as the server and the client; this was inelegant, was still limited to two peers, and was still using a predefined pipe ID.

I wanted to expand this to create a true decentralized chat client that could support more than just two peers. The plan up to this point has been for each peer to advertise a JxtaServerPipe, establish a connection whenever one is detected using discoveryService and to spawn a thread each time the serverPipe accepts such a connection.

However, so far I have only been picking up PeerAdvertisements, even when I explicitly publish PipeAdvertisements.

Simplified pseudocode of what I have.

main()
  Start JXTA network
  while(true)
    check_for_adv()
    ServerPipe.accept()
  End while
End main

discoveryEvent(event) //Called when check_for_adv() finds an advertisement
  if(event is an advertisement for a serverPipe)
    create a new JxtaBiDiPipe using the advertisement
  End if
End discoveryEvent

The only* advertisements I receive are PeerAdvertisements, which do not contain a PipeID that I can use to create the BiDiPipe. This is true even when I create and publish a PipeAdvertisement in main(). *: I also receive ModuleImplAdvertisements from the discoveryService of the other peer, which are just ignored.

I feel like I'm missing something obvious. Is this a good way of establishing contact between peers? How do you establish pipes between several peers without any foreknowledge and with identical code?

I have been looking for examples, but none of the ones I find do what I need, or they are just too complex for me to comprehend. I am a noob when it comes to JXTA, so please keep it simple and verbose for me.

Lastly. Here is the full code. It's written in scala.

object main extends DiscoveryListener 
{
  var manager : NetworkManager = null;
  val connection_threads = new ArrayList[Thread]();
  val connection_remotePipeIDs = new ArrayList[String]();
  val connection_remotePeerIDs = new ArrayList[String]();

  var pg : PeerGroup = null;
  var discovery : DiscoveryService = null;
  var peerID : PeerID = null;
  var peerAdv : PeerAdvertisement = null;
  var pipeID : PipeID = null;
  var pipeAdv : PipeAdvertisement = null;


    def main(args: Array[String]): Unit =
  {

    //---Start JXTA---

    try
    {
      manager = new NetworkManager(NetworkManager.ConfigMode.ADHOC, "PipeTest", new File(new File("cache"), "PipeTest").toURI());

      manager.startNetwork();
    }
    catch
    {
      case e: Throwable =>
      {
        // Some type of error occurred. Print stack trace and quit.
        System.err.println("Fatal error -- Quitting");
        e.printStackTrace(System.err);
        System.exit(-1);
      }
    }

    pg = manager.getNetPeerGroup();
    discovery = pg.getDiscoveryService();
    discovery.addDiscoveryListener(this);
    val pipeservice = pg.getPipeService();




    //---Create the IDs and advertisements for this peer---

    peerID = IDFactory.newPeerID(pg.getPeerGroupID);
    peerAdv = getPeerAdvertisement(peerID);
    pipeID = IDFactory.newPipeID(pg.getPeerGroupID);
    pipeAdv = getPipeAdvertisement(pipeID);



    val timeout = 5*1000; //10 seconds
    while (true) //Loop until something tells otherwise
    {
      //Check for advertisements, responses are handled in DiscoveryEvent
      discovery.getRemoteAdvertisements(null, DiscoveryService.ADV, null, null, 1); //propagate to any, filter type, filter attribute any, filter value any, # of responses per peer, no specific listener
      discovery.getRemoteAdvertisements(null, DiscoveryService.GROUP, null, null, 1); //propagate to any, filter type, filter attribute any, filter value any, # of responses per peer, no specific listener
      discovery.getRemoteAdvertisements(null, DiscoveryService.PEER, null, null, 1); //propagate to any, filter type, filter attribute any, filter value any, # of responses per peer, no specific listener


      val serverPipe : JxtaServerPipe = new JxtaServerPipe(pg, pipeAdv);
      serverPipe.setPipeTimeout(timeout);

      //try connecting by acting as a server for a bit
      var bipipe : JxtaBiDiPipe = null;
      try
      {
        discovery.publish(pipeAdv, timeout, timeout);
        discovery.remotePublish(pipeAdv, timeout);
        bipipe = serverPipe.accept(); //Check if there is a contact.
      }
      catch {
        case all : Throwable => { serverPipe.close(); }
      }

      //Connection found?
      if (bipipe != null)
      {
        //TODO: Implement handler

        //Create a new pipe ID and advertisement since this one is now taken!
        pipeID = IDFactory.newPipeID(pg.getPeerGroupID);
        pipeAdv = getPipeAdvertisement(pipeID);
      }
      else {  } //Timed out, do nothing

      serverPipe.close();

    }//end while



    while (!connection_threads.isEmpty()) //Wait for all connections to close...
    {
      connection_threads.synchronized { connection_threads.wait(); }
    }

    //Then close the network
    manager.stopNetwork();
    out("JXTA Shutdown");

  } //end main()



  def getPipeAdvertisement(ID_String : PipeID) : PipeAdvertisement =
  {
    val advertisement : PipeAdvertisement = AdvertisementFactory.newAdvertisement(PipeAdvertisement.getAdvertisementType()).asInstanceOf[PipeAdvertisement];
    advertisement.setPipeID(ID_String);
    advertisement.setType(PipeService.UnicastType);
    advertisement.setName(peerID.toString());
    return advertisement;
  }



  def getPeerAdvertisement(ID_String : PeerID) : PeerAdvertisement =
  {
    val advertisement : PeerAdvertisement = AdvertisementFactory.newAdvertisement(PeerAdvertisement.getAdvertisementType()).asInstanceOf[PeerAdvertisement];
    advertisement.setPeerID(ID_String);
    //advertisement.setType(PipeService.UnicastType);
    advertisement.setName(peerID.toString());
    return advertisement;
  }



  def discoveryEvent(ev : DiscoveryEvent) : Unit = 
  {
    val res : DiscoveryResponseMsg = ev.getResponse();

    // let's get the responding peer's advertisement
    var adv : Advertisement = null;
    val en : java.util.Enumeration[Advertisement] = res.getAdvertisements();
    if (en != null) {
      while (en.hasMoreElements()) {
        adv = en.nextElement();
        val type_string = adv.getAdvType;

        adv match {
          case adv : PipeAdvertisement => { //Is it a pipe advertisement?
            //Never gets here?!
          }
          case adv : PeerAdvertisement => {

            val remotePeerID = adv.getID()

            if(connection_remotePeerIDs.contains(remotePeerID)) {  } //Already in there, skip.
            else {
              //New thread that creates the pipe...
              val BDPipe : JxtaBiDiPipe = new JxtaBiDiPipe();
              try {
                //Create the pipe
              }
              catch { case t : Throwable => { out("Failed to create pipe!") }}
            }
          }
          case adv : ModuleImplAdvertisement => {  } //Sent by discoveryService en masse, ignore
          case _ => {
            out("Received discovery response of unknown type: " + type_string);
          }
        } //End match
      } //End while
    }//End if
  } //End discoveryEvent
} //End object main

by Felix Eriksson at March 27, 2015 06:46 PM

Lobsters

/r/netsec

StackOverflow

Play2-Scala-Reactivemongo Losing Mongo schema flexibility using ReactiveMongo

I'm trying to use Play2 with ReactiveMongo to build my web application. I spent a few days reading the related documentation and tutorials. In my opinion one of the most powerful features of MongoDB is schema flexibility, that is, the possibility of storing in the same collection documents that don't have exactly the same structure but may differ from one another. For example, one document may have a field that another doesn't.

Play with ReactiveMongo uses case classes to implement models, but case classes obviously have a fixed structure, so all the instances of a class will have the same structure.

Does it represent a loss of flexibility? Or there is a way to implement schema flexibility with ReactiveMongo?
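One partial answer (a sketch, assuming ReactiveMongo's BSON macros): Option fields recover some flexibility inside a fixed case class, since None simply means "this field is absent in this document":

    import reactivemongo.bson.Macros

    case class Person(name: String, age: Int, nickname: Option[String])

    object Person {
      // generates a reader/writer that omits nickname when it is None
      implicit val handler = Macros.handler[Person]
    }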

by tano at March 27, 2015 06:31 PM

Can't get path-dependent types to work in scala enumerations

I'm trying to wrap my head around path-dependent types in Scala's enumerations while writing a Reads/Writes for Play2. Here is the code I have so far; it works, but with an asInstanceOf:

implicit def enumerationReads[T <: Enumeration](implicit t: T): Reads[t.Value] = {
    val validationError = ValidationError("error.expected.enum.name", t.values.mkString(", "))
    Reads.of[String].filter(validationError)(s ⇒ t.values.exists(v ⇒ v.toString == s)).map(t.withName(_))
  }

implicit def enumerationValueSetReads[T <: Enumeration](implicit t: T): Reads[t.ValueSet] =
    Reads.seq(enumerationReads[T]).map(seq ⇒ t.ValueSet(seq.asInstanceOf[Seq[t.Value]]: _*))

What can I do to get rid of the asInstanceOf on the last line? I tried typing the enumerationReads as enumerationReads[t.Value], but that doesn't work: the compiler complains in the argument of t.ValueSet that Seq[t.Value] cannot be cast to Seq[t.Value]. Yes, that didn't make sense to me either, until I started to realize these different t's might actually be different, since they are used in a closure.

So, what to do to make my code super-duper asInstanceOf free?

by Dibbeke at March 27, 2015 06:27 PM

Dave Winer

"Radically silo-free"

Over on Facebook and on Twitter I posted a thought, that software could be "radically silo-free."

David Eyes asked what it means. I referred him to my blog, I said "scroll to the bottom and then read up." That's because this idea is so fresh that I hadn't yet written a post explaining. I thought I probably should.

First mention

First, I said I was going to hold up the release of MyWord Editor because I wanted it to be silo-free from the start. Then I spent a week doing that. While that was happening, I made a list of things that would make software silo-free, and I did all of them. I wanted to consciously, mindfully, create something that perfectly illustrated freedom from silos, or came as close as I possibly could to the ideal. In that post I defined the term.

"Silo-free" means you can host your blog, and its editor, on your domain. I may offer a service that runs the software, but there will be no monopoly, and it will be entirely open source, before anything ships publicly.

Second mention

In the post announcing the open source release of MyWord Editor, I included a section listing the ways it was silo-free, fleshing out the idea from the earlier post. And from that list comes my definition.

  1. Has an open API. Easily cloned on both sides (that's what open means in this context).

  2. It's a good API, it does more than the app needs. So you're not limited by the functionality of MyWord Editor. You can make something from this API that is very different, new, innovative, ground-breaking.

  3. You get the source, under a liberal license. No secrets. You don't have to reinvent anything I've done.

  4. Users can download all their data, with a simple command. No permission needed, no complex hoops to jump through. Choose a command, there's your data. Copy, paste, you've just moved.

  5. Supports RSS 2.0. That means your ideas can plug into other systems, not just mine.

There may be other ways to be silo-free. Share your ideas.

Why it's "radical"

In that post I explained why the software was "radical".

These days blogging tools try to lock you into their business model, and lock other developers out. I have the freedom to do what I want, so I decided to take the exact opposite approach. I don't want to lock people in and make them dependent on me. Instead, I want to learn from thinkers and writers and developers. I want to engage with other minds. Making money, at this stage of my career, is not so interesting to me. I'd much rather make ideas, and new working relationships, and friends.

I guess you could say I believe there are other reasons to make software, other than making money. Some people like to drive racecars when they get rich. What I want is to drive my own Internet, and for you to drive yours too.

March 27, 2015 06:21 PM

StackOverflow

How to build a Play2 project using play-yeoman from IntelliJ IDEA

Now I am reading a code of following project. I am a newbie of Play2(w/ Scala).

https://github.com/mohiva/play-silhouette-angular-seed#master

When I built it following the instructions in the link above, it worked fine (in a terminal). But when I opened the project with IntelliJ IDEA, I got the following error.

[info] play - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
'force' not enabled
Will run: [grunt, --gruntfile=Gruntfile.js, watch] in /Users/chabashilah/
/prv/dev/test/test-play-silhouette-angular-seed/ui
java.io.IOException: Cannot run program "grunt" (in directory "/Users/chabashilah/
/prv/dev/test/test-play-silhouette-angular-seed/ui"): error=2, No such file or directory
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
Caused by: java.io.IOException: error=2, No such file or directory
    at java.lang.UNIXProcess.forkAndExec(Native Method)
    at java.lang.UNIXProcess.<init>(UNIXProcess.java:248)
    at java.lang.ProcessImpl.start(ProcessImpl.java:134)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
[trace] Stack trace suppressed: run 'last compile:run' for the full output.
[error] (compile:run) java.io.IOException: Cannot run program "grunt" (in directory "/Users/chabashilah/
/prv/dev/test/test-play-silhouette-angular-seed/ui"): error=2, No such file or directory
[error] Total time: 1 s, completed 2015/03/28 0:36:40
1. Waiting for source changes... (press enter to interrupt)

I guess this is a small configuration mistake, but I couldn't find out why it happens. How can I make it build?

Thanks.

by chabashilah at March 27, 2015 06:20 PM

How can I persist an ansible variable across ansible roles?

I've registered a variable in a play.

---
- hosts: 127.0.0.1
  gather_facts: no
  connection: local
  sudo: no
  vars_files:
    - vars.yml
  tasks:
    - name: build load balancer
      os_load_balancer: net=mc_net ext_net=vlan3320 name=load_balancer protocol=HTTPS port=80
      register: my_lb

I can access that variable fine, until I make the request inside a role.

For example, in a separate role in the same run, I want to access that registered variable:

- debug: var=my_lb

I get the following output:

{'msg': "AnsibleUndefinedVariable: One or more undefined variables: 'my_lb' is undefined", 'failed': True}

How can I access variables registered in a separate role, within the same play?

Edit for clarity of how things piece together:

Top Play
-includes: 
  - Sub play 1
    - registers variable foo
  - Sub play 2
    -includes:
      - sub play A
        - role 1 
        - role 2
        - role 3 
          - references variable foo in template
      - Sub play B
  - Sub play 3

by J0hnG4lt at March 27, 2015 06:20 PM

/r/scala

Single threaded laptop implementation beating a 128 node GraphX cluster on a 1TB data set (128 billion nodes) - What is a use case for GraphX then? when is it worth the cost?

Remember that article that went viral on HN? (Where a guy showed how GraphX / Giraph / GraphLab / Spark have worse performance on a 128-node cluster than on a single-threaded machine? If not, here is the article - http://www.frankmcsherry.org/graph/scalability/cost/2015/01/15/COST.html)

Well, this stirred up a lot of commotion in the big data community (and Spark/GraphX in particular). People (justifiably, I guess) blamed him for not really having "big data", as his entire data set fits in memory, so it doesn't really count.

So he took up the challenge and came up with a pretty unarguable counter-benchmark, now with a huge data set (1TB of data, encoded using Hilbert curves to 154GB, but still large). See http://www.frankmcsherry.org/graph/scalability/cost/2015/02/04/COST2.html

He provided the source here https://github.com/frankmcsherry/COST as an example

So, what is the counter-argument? It pretty much seems like a blow to the face of Spark / GraphX etc. (which I like and use on a daily basis).

Before I dive into revalidating his benchmarks with my own use cases: what is your opinion on this? If this is the case, then what IS the use case for using Spark/GraphX at all?

submitted by eranation
[link] [9 comments]

March 27, 2015 06:20 PM

Planet Clojure

Clojure Reactive Programming has been published

I'm extremely happy to let everyone know my book, Clojure Reactive Programming, has finally been published!

You can get it at the publisher's website or on Amazon. I had a great time writing it and I truly hope you find it useful!

I've met a few authors here and there and I heard more than once that a book is never really finished. I now know what they mean.

The book doesn't cover everything I wanted to write about due to time and space limitations. Having said that, now that the book is out I do plan to expand on a few things using this blog.

Stay tuned!

Thanks to everyone who gave me feedback on early drafts of the book! :)

by Leonardo Borges at March 27, 2015 06:10 PM

/r/netsec

StackOverflow

Ansible playbook shell output

I would like to quickly monitor some hosts using commands like ps, dstat, etc. using ansible-playbook. The ansible command itself does what I want perfectly; for instance I'd use:

ansible -m shell -a "ps -eo pcpu,user,args | sort -r -k1 | head -n5"

and it nicely prints all std output for every host like this:

localhost | success | rc=0 >>
0.0 root     /sbin/init
0.0 root     [kthreadd]
0.0 root     [ksoftirqd/0]
0.0 root     [migration/0]

otherhost | success | rc=0 >>
0.0 root     /sbin/init
0.0 root     [kthreadd]
0.0 root     [ksoftirqd/0]
0.0 root     [migration/0] 

However this requires me to keep a bunch of shell scripts around, one for every task, which is not very 'ansible', so I put this in a playbook:

---
-
  hosts: all
  gather_facts: no
  tasks:
    - shell: ps -eo pcpu,user,args | sort -r -k1 | head -n5

and run it with -vv, but the output basically shows the dictionary content, and newlines are not printed as such, so this results in an unreadable mess like this:

changed: [localhost] => {"changed": true, "cmd": "ps -eo pcpu,user,args | sort -r -k1 
head -n5 ", "delta": "0:00:00.015337", "end": "2013-12-13 10:57:25.680708", "rc": 0,
"start": "2013-12-13 10:57:25.665371", "stderr": "", "stdout": "47.3 xxx    Xvnc4 :24
-desktop xxx:24 (xxx) -auth /home/xxx/.Xauthority -geometry 1920x1200\n
.... 

I also tried adding register: var and then a 'debug' task to show {{ var.stdout }}, but the result is of course the same.

Is there a way to get nicely formatted output from a command's stdout/stderr when run via a playbook? I can think of a number of possible ways (format output using sed? redirect output to file on the host then get that file back and echo it to the screen?), but with my limited knowledge of the shell/ansible it would take me a day to just try it out.

by stijn at March 27, 2015 06:05 PM

Unexpected Result when Overriding 'val'

In Scala 2.10.4, Given the following class:

scala> class Foo { 
     |   val x = true
     |   val f = if (x) 100 else 200
     | }
defined class Foo

The following two examples make sense to me:

scala> new Foo {}.f
 res0: Int = 100

scala> new Foo { override val x = false}.f
res1: Int = 200

But, why doesn't this call return 100?

scala> new Foo { override val x = true }.f
res2: Int = 200
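For what it's worth, a sketch of what is going on plus a common workaround (my own note, not from the question): f is initialized inside Foo's constructor, which runs before the anonymous subclass initializes its overriding x, so the read of x still yields the default false. An early initializer runs the override before Foo's constructor and restores the expected value (making f a lazy val or a def in Foo would likewise avoid the problem):

    (new { override val x = true } with Foo).f   // 100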

by Kevin Meredith at March 27, 2015 06:04 PM

Fefe

Dear readers, I have a bad idea here that ...

Dear readers, I have a bad idea here, one so bad that in the eternal race of bad ideas it takes a top spot even within the CDU segment. I would even rate it as the favorite of this decade. I cannot remember ever having heard such a bad idea.

So, here is the worst idea of the decade:

+++ 13:49: CDU transport politician Thomas Jarzombek has demanded better internet-based ground control of aircraft
You can probably already guess what's coming next.
"When aircraft are equipped with internet in the future, we should set up a communication channel to ground control outside the cockpit," Jarzombek told our editorial team. "Ground control should be able to look into the aircraft through internet cameras"
Great! But wait, that's not yet the worst idea of the decade. That's just the warm-up. NOW comes the worst idea of the decade:
"Ground control must also be able, in the future, to open cockpit doors from the outside over the internet," Jarzombek said.
OH YES!!! And the security of that we'd best leave to the specialist experts who also designed the security concept of De-Mail! Or maybe Email made in Germany?

Update: Oh no, wait, even better! Jarzombek has founded a "company for IT services"! They could do it!1!!

March 27, 2015 06:01 PM

DataTau

StackOverflow

Scala: What is the difference between (a: String) and (a: => String) for argument?

What is the difference between these two methods?

def method1(msg: => String) = print(msg)
def method2(msg: String) = print(msg)

I can invoke them both.

  method1("1")
  method2("2") 

And they print 12
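
The difference shows up once the argument has a side effect or a cost: a by-name parameter (msg: => String) is re-evaluated at every use inside the method, while a by-value parameter (msg: String) is evaluated exactly once, at the call site. A small sketch (names are illustrative):

def twiceByName(msg: => String) = { print(msg); print(msg) }  // msg evaluated twice
def twiceByValue(msg: String)   = { print(msg); print(msg) }  // argument evaluated once

def make(): String = { println("evaluating!"); "x" }

twiceByName(make())   // prints "evaluating!" twice
twiceByValue(make())  // prints "evaluating!" once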

by barbara at March 27, 2015 05:56 PM

Fred Wilson

Feature Friday: Archives of Live Broadcasts

I wrote about the live broadcasting craze earlier this week. There are three significant players in this market, YouNow, Twitter/Periscope, and Meerkat. I’m a shareholder in two of them (YouNow is a USV portfolio company and we own a lot of Twitter stock personally). So I’ve been quite interested to see how this market is shaping up and I’ve been using all three apps this week.

I should say that I don’t see myself as a broadcaster. That may change. But I honestly don’t know what parts of my day are interesting enough to broadcast and would be appropriate to broadcast. I’m sure the USV Monday meeting would be interesting to broadcast but it would not be fair to all the companies we talk about in that meeting confidentially to broadcast that. I’m sure the SoundCloud board meeting would be interesting to broadcast but I’m equally sure the company would be mortified that I would even dare to think of such a thing. I know that I will get some suggestions in the comments and if any are good, I will reconsider the “I’m not a broadcaster” attitude I have right now.

I did accidentally broadcast two seconds on Meerkat this morning.


That happened because I accidentally pushed a button and went live without realizing it (and tweet spammed almost 400,000 followers) to my great annoyance. That’s a UX fail as far as I’m concerned and I’m not sure I’m going to open that app again.

But I do see myself as a consumer of these broadcasts. We’ve been an investor in YouNow for something like three years and I’ve spent time watching broadcasts on YouNow. It’s a classic Internet content marketplace. There’s brilliance right next to silliness. But when you catch something brilliant on YouNow, it’s kind of magical. Tyler Oakley did a YouNow last night that had 120,000 viewers and he raised $20,000 for his Prizeo challenge during his live broadcast. You can watch Tyler’s broadcast via YouNow’s archive mode.

Which leads me to my feature friday topic – archives of live broadcasts. I’m getting real time mobile notifications on my phone from Periscope and YouNow and Meerkat and I’m also seeing invitations to join these live broadcasts in my Twitter feed. But I’m pretty busy during the day when all of these broadcasts are happening. I realize there’s value in watching live (the chat, the engagement, the favoriting, etc) but honestly I can’t tune in live very often.

What I’d like to be able to do, ideally right from my mobile notifications or the tweet, is to favorite or mark to watch later (I use the favorite button on many platforms as my “read later” button).

Twitter’s Periscope also has archives. I snapped this screenshot today from my Periscope app.


I watched my friend Howard's broadcasts via this archive screen this morning, further confirming that neither I nor Howard is interesting enough to be a broadcaster :)

But regardless of whether that particular archived broadcast was any good, I think, ironically, archives are an important part of the livestreaming experience, and the leading apps should support this functionality if they want to reach the broadest user base.

by Fred Wilson at March 27, 2015 05:54 PM

CompsciOverflow

What is a student process?

According to Galvin and Silberschatz, 5 queues are maintained in multilevel queue scheduling, each for:

  • System Process
  • Interactive Process
  • Interactive Editing Process
  • Batch process
  • Student Process

where the system process has the highest priority and the student process the lowest.

What is meant by Student Process?

Also, I have only a vague idea about the rest, apart from the system process. If possible, elucidate them.

by MAKZ at March 27, 2015 05:48 PM

StackOverflow

Scala string splitting in descending amounts of chars

Given a string such as

val s = (0 to 9).mkString
s: String = 0123456789

striving to find a functional (neat) approach to get an Array[String] like this

Array("0123", "456", "78", "9")

Using substring on pre-calculated indexes proves quite messy.

Update: The array size n and the string length l are always related by

val l = n*(n+1)/2

Put another way, input strings for n = 1,2,... have length 1,3,6,10,15,... Thus, as noted by @m-z, a string like 0123456789a has no solution.
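
Given that relation, one functional sketch is to recover n from l = n*(n+1)/2, build the split boundaries with scanLeft, and slice with sliding (descendingChunks is an illustrative name):

def descendingChunks(s: String): Array[String] = {
  val n = ((math.sqrt(8.0 * s.length + 1) - 1) / 2).toInt  // solve n*(n+1)/2 = s.length
  val bounds = (n to 1 by -1).scanLeft(0)(_ + _)           // 0, n, n+(n-1), ..., s.length
  bounds.sliding(2).map { case Seq(a, b) => s.substring(a, b) }.toArray
}

descendingChunks("0123456789")  // Array("0123", "456", "78", "9")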

by elm at March 27, 2015 05:48 PM

/r/netsec

Lobsters

Open Source (Almost) Everything (2011)

Since we don’t have an ‘open-source’ tag, I submitted under ‘programming’.

Comments

by av at March 27, 2015 05:39 PM

StackOverflow

Why does systemd service running Node.js app show state of failed when stopped properly?

I have a systemd service file that is running a Node.js app. The service seems to run fine, but when I stop it using systemctl stop train, it enters a failed state.

[root@localhost portaj]# systemctl stop train
[root@localhost portaj]# systemctl status train
train.service - Train Service
   Loaded: loaded (/etc/systemd/system/train.service; enabled)
   Active: failed (Result: exit-code) since Mon 2014-08-04 04:40:17 UTC; 1s ago
  Process: 25706 ExecStart=/opt/node/bin/node /opt/train/src/server/start.js (code=exited, status=143)
 Main PID: 25706 (code=exited, status=143)

Aug 04 04:33:39 localhost.localdomain train[25706]: Train Server listening on port 3000
Aug 04 04:40:17 localhost.localdomain systemd[1]: Stopping Train Service...
Aug 04 04:40:17 localhost.localdomain systemd[1]: train.service: main process exited, code=exit.../a
Aug 04 04:40:17 localhost.localdomain systemd[1]: Stopped Train Service.
Aug 04 04:40:17 localhost.localdomain systemd[1]: Unit train.service entered failed state.

My service file looks like this:

[Unit]
Description=Train Service
After=network.target

[Service]
Environment=PORT=4000
Environment=NODE_ENV=development
ExecStart=/opt/node/bin/node /opt/train/src/server/start.js
Restart=on-failure
SyslogIdentifier=train

[Install]
WantedBy=network.target

I suspect that the Node.js app is returning a status code that systemd thinks is a failure. (Status 143 is 128 + 15, i.e. termination by SIGTERM.) I am not sure whether I need to make my Node.js app return a different status code, if that's possible, or whether I need to modify the systemd service file to act differently.

In case it helps, the deployment scripts are here: https://github.com/JonathanPorta/ansible-train

The actual Node.js app is here: https://github.com/JonathanPorta/train

Thanks in advance for the help!

by Jonathan at March 27, 2015 05:37 PM

CompsciOverflow

Example of connected graph with a cut where a light edge is not on a minimum spanning tree

Can someone help me with this question?

Give a simple example of a connected graph in which there exists a cut and a light edge crossing the cut does not belong to a minimum spanning tree.

by PTNPNX at March 27, 2015 05:35 PM

Lobsters

8 ways to report errors in Haskell

In the GMane discussion linked, click the subject to see the thread of replies.

Comments

by pushcx at March 27, 2015 05:25 PM

StackOverflow

How do I get logs/details of ansible-playbook module executions?

Say I execute the following.

$ cat test.sh
#!/bin/bash
echo Hello World
exit 0

$ cat Hello.yml
---

- hosts: MyTestHost
  tasks:
  - name: Hello yourself
    script: test.sh


$ ansible-playbook  Hello.yml

PLAY [MyTestHost] ****************************************************************

GATHERING FACTS ***************************************************************
ok: [MyTestHost]

TASK: [Hello yourself] ********************************************************
ok: [MyTestHost]

PLAY RECAP ********************************************************************
MyTestHost                    : ok=2    changed=0    unreachable=0    failed=0

$

I know for sure that it was successful.

Where/how do I see the "Hello World" echoed/printed by my script on the remote host (MyTestHost)? Or the script's return/exit code?

My research shows me it would be possible to write a plugin to intercept module execution callbacks, or something along those lines, and write a log file. I would prefer not to waste my time with that.

E.g. something like the stdout below (note that I'm running ansible and not ansible-playbook):

$ ansible plabb54 -i /project/plab/svn/plab-maintenance/ansible/plab_hosts.txt -m script -a ./test.sh
plabb54 | success >> {
    "rc": 0,
    "stderr": "",
    "stdout": "Hello World\n"
}

$

by Kashyap at March 27, 2015 05:19 PM

Lobsters

StackOverflow

Performance issue while finding min and max with functional approach

I have an array of subviews and I want to find the lowest tag and the highest tag (~ min and max). I tried to play with the functional approach of Swift and optimized it as much as my knowledge allowed me, but when I do this:

let startVals = (min:Int.max, max:Int.min)
var minMax:(min: Int, max: Int) = subviews.filter({$0 is T2GCell}).reduce(startVals) {
        (min($0.min, $1.tag), max($0.max, $1.tag))
}

I still get worse performance (approximately 10x slower) than the good ol' for loop:

var lowest2 = Int.max
var highest2 = Int.min
for view in subviews {
    if let cell = view as? T2GCell {
        lowest2 = lowest2 > cell.tag ? cell.tag : lowest2
        highest2 = highest2 < cell.tag ? cell.tag : highest2
    }
}

To be totally precise, I am also including a snippet of the measuring code. Note that the recalculation to human-readable times is done outside of any measurement:

let startDate: NSDate = NSDate()

// code

let endDate: NSDate = NSDate()

// outside of measuring block
let dateComponents: NSDateComponents = NSCalendar(calendarIdentifier: NSCalendarIdentifierGregorian)!.components(NSCalendarUnit.CalendarUnitNanosecond, fromDate: startDate, toDate: endDate, options: NSCalendarOptions(0))
let time = Double(Double(dateComponents.nanosecond) / 1000000.0)

My question is: am I doing it wrong, or is this use case simply not suitable for the functional approach?


EDIT

This is 2x slower:

var extremes = reduce(lazy(subviews).map({$0.tag}), startValues) {
    (min($0.lowest, $1), max($0.highest, $1))
}

And this is only 20% slower:

var extremes2 = reduce(lazy(subviews), startValues) {
    (min($0.lowest, $1.tag), max($0.highest, $1.tag))
}

Narrowed and squeezed down to very nice performance times, but still not as fast as the for loop.


EDIT 2

I noticed I left out the filter in previous edits. When added:

var extremes3 = reduce(lazy(subviews).filter({$0 is T2GCell}), startValues) {
    (min($0.lowest, $1.tag), max($0.highest, $1.tag))
}

I'm back to 2x slower performance.
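
As an aside, the single pass that makes the for loop fast can be kept in a functional style by folding once, which avoids the intermediate array that filter allocates; a rough sketch of the shape, written in Scala for illustration (plain Ints standing in for the view tags):

val tags = List(5, 3, 9, 1, 7)
val (lowest, highest) = tags.foldLeft((Int.MaxValue, Int.MinValue)) {
  case ((lo, hi), t) => (math.min(lo, t), math.max(hi, t))
}
// lowest == 1, highest == 9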

by Michal at March 27, 2015 05:06 PM

Ansible notify handlers in another role

Can I notify the handler in another role? What should I do to make ansible find it?

The use case is, e.g., that I want to configure some service and then restart it if the configuration changed. Different OSes probably have different files to edit, and even the file formats can differ, so I would like to put them into different roles (because the file formats differ, it can't be done just by setting group_vars). But the way to restart the service is the same, using the service module, so I'd like to put the handler into a common role.

Is there any way to achieve this? Thanks.

by icando at March 27, 2015 04:52 PM

QuantOverflow

How can I do a dynamic GARCH model using extended Kalman filter in R?

Today I was reading the article quoted here, in which an adaptive (dynamic) GARCH model is proposed. How can I do this in R? Either the extended Kalman filter or a particle filter would be fine. I usually use rugarch or rmgarch for GARCH models. Can I do it with these packages, or do I need some others? Thanks

by Mik_79 at March 27, 2015 04:51 PM

StackOverflow

How to add Friend auth to Chestnut template

I've struggled for a few days trying to get a simple use of the security library, Friend, to work with the Chestnut clj/cljs template.

A POST request to the /login URI is supposed to log me in and allow access to protected routes like /role-user. But for some reason I am not able to log in; the POST returns a 303 and routes me back to the root page.

I added the Friend middleware inside the http-handler function. Is this the correct place to apply this sort of middleware? I thought maybe the reload or api-defaults middleware could be messing up the Friend middleware? However, removing them does not fix things.

(def http-handler
  (if is-dev?
    (-> #'routes
        (reload/wrap-reload)
        (friend/authenticate
          {:allow-anon? true
           :login-uri "/login"
           :default-landing-uri "/"
           :unauthorized-handler #(-> (h/html5 [:h2 "You do not have sufficient privileges to access " (:uri %)])
                                      resp/response
                                      (resp/status 401))
           :credential-fn (fn [x]
                            (let [res (creds/bcrypt-credential-fn @users x)]
                              (log/info x)
                              (log/info res)
                              res))
           :workflows [(workflows/interactive-form)]})
        (wrap-defaults api-defaults))
    (wrap-defaults routes api-defaults)))

Based on the print statements I was able to figure out that the credential-fn function does get called on the POST request, with the correct params, and the function returns the correct (authenticated) result.

This http-handler is used as follows:

(defn run-web-server [& [port]]
  (let [port (Integer. (or port (env :port) 10555))]
    (print "Starting web server on port" port ".\n")
    (run-jetty http-handler {:port port :join? false})))

(defn run-auto-reload [& [port]]
  (auto-reload *ns*)
  (start-figwheel))

(defn run [& [port]]
  (when is-dev?
    (run-auto-reload))
  (run-web-server port))

For what it's worth, here are my routes.

(defroutes routes
  (GET "/" req
    (h/html5
      misc/pretty-head
      (misc/pretty-body
       (misc/github-link req)
       [:h3 "Current Status " [:small "(this will change when you log in/out)"]]
       [:p (if-let [identity (friend/identity req)]
             (apply str "Logged in, with these roles: "
               (-> identity friend/current-authentication :roles))
             "anonymous user")]
       login-form
       [:h3 "Authorization demos"]
       [:ul
        [:li (e/link-to (misc/context-uri req "role-user") "Requires the `user` role")]]
       [:p (e/link-to (misc/context-uri req "logout") "Click here to log out") "."])))
  (GET "/login" req
    (h/html5 misc/pretty-head (misc/pretty-body login-form)))
  (GET "/logout" req
    (friend/logout* (resp/redirect (str (:context req) "/"))))
  (GET "/role-user" req
    (friend/authorize #{::users/user} "You're a user!")))

by currentoor at March 27, 2015 04:46 PM

Cannot Diagnose Neo4j REST API Inquiry Failure

I'm messing with some code in Scala (2.9.2) to query a neo4j (2.2.0) server using REST API via Jersey (1.19) and Cypher. My first transaction succeeds (HTTP 200) returning some nice XHTML. The second more involved transaction containing a valid Cypher query returns 405 with no data or message. It causes a stack dump in the server log. The same query succeeds via the console to the same server. I don't know how to proceed to diagnose the problem. Details follow:

Here is the transaction and result via Cypher console:

neo4j-sh (?)$ match (p)-[r]->(n) where n.number = '15' return p,r,n;
+------------------------------------------------------------------------------------------------------------+
| p                                        | r                                     | n                       |
+------------------------------------------------------------------------------------------------------------+
| Node[1451]{name:"Elizabeth Maher Muoio"} | :REPRESENTS[1725]{inHouse:"Assembly"} | Node[1381]{number:"15"} |
| Node[1450]{name:"Reed Gusciora"}         | :REPRESENTS[1724]{inHouse:"Assembly"} | Node[1381]{number:"15"} |
| Node[1449]{name:"Shirley K. Turner"}     | :REPRESENTS[1723]{inHouse:"Senate"}   | Node[1381]{number:"15"} |
+------------------------------------------------------------------------------------------------------------+
3 rows
24 ms

Here is the Scala snippet:

val n4url = "http://dev.cosi.com:7474/db/data/"

// Make a client...
val c = Client.create // Uses jersey 1.19
//val cl = c.getClass.getName;     println("cl: "+cl);
c.addFilter(new HTTPBasicAuthFilter("neo4j","connected"))

// Dumb test of comms w server...
val resource = c.resource(n4url)
val response = resource.get(classOf[ClientResponse])
printf("\nGET on [%s], status code [%d]\n%s\n",n4url,response.getStatus(),
  response.getEntity(classOf[String]))
response.close

println("\n=== MOVING ALONG TO REAL INQUIRY ===")
val inq = n4url+"transaction/commit"
val res2 = c.resource(inq)
val query = "match (p)-[r]->(n) where n.number = '15' return p,r,n"
val payload = "{\"statements\" : [ {\"statement\" : \"" + query + "\"} ]}";
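// NB: the call chain below starts from `resource` (the root URL used in the dumb
// test above), not from `res2` (the transaction/commit resource built just above)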
val resp2 = resource.
  accept(MediaType.APPLICATION_JSON).
  `type`(MediaType.APPLICATION_JSON).
  entity(payload).
  post(classOf[ClientResponse])
printf("POST [%s] to [%s], status code [%d], returned data: " +
  System.getProperty("line.separator") + "%s\n",
  payload,inq,resp2.getStatus,resp2.getEntity(classOf[String]))
resp2.close

Here is the output of the Scala program:

    GET on [http://dev.cosi.com:7474/db/data/], status code [200]
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html><head><title>Root</title><meta content="text/html; charset=utf-8" http-equiv="Content-Type">
<link href='http://resthtml.neo4j.org/style/rest.css' rel='stylesheet' type='text/css'>
<script type='text/javascript' src='/webadmin/htmlbrowse.js'></script>
</head>
<body onload='javascript:neo4jHtmlBrowse.start();' id='root'>
<div id='content'><div id='header'><h1><a title='Neo4j REST interface' href='/'><span>Neo4j REST interface</span></a></h1></div>
<div id='page-body'>
<table class="root"><caption>Root</caption>
<tr class='odd'><th>relationship_index</th><td><a href="http://dev.cosi.com:7474/db/data/index/relationship">http://dev.cosi.com:7474/db/data/index/relationship</a></td></tr>
<tr><th>node_index</th><td><a href="http://dev.cosi.com:7474/db/data/index/node">http://dev.cosi.com:7474/db/data/index/node</a></td></tr>
</table>
<div class='break'>&nbsp;</div></div></div></body></html>

=== MOVING ALONG TO REAL INQUIRY ===
POST [{"statements" : [ {"statement" : "match (p)-[r]->(n) where n.number = '15' return p,r,n"} ]}] to [http://dev.cosi.com:7474/db/data/transaction/commit], status code [405], returned data: 

Here is the server log fragment:

   FINE: Mapped exception to response: 405
javax.ws.rs.WebApplicationException
        at com.sun.jersey.server.impl.uri.rules.TerminatingRule.accept(TerminatingRule.java:66)
        at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
        at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
        at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
        at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
        at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
        at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
        at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:540)
        at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:715)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
        at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:800)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1669)
        at org.neo4j.server.rest.dbms.AuthorizationFilter.doFilter(AuthorizationFilter.java:120)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
        at org.eclipse.jetty.server.Server.handle(Server.java:497)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:248)
        at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:620)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:540)
        at java.lang.Thread.run(Thread.java:745)

by Bill Michaelson at March 27, 2015 04:43 PM

Functional Programming Beginner : Currying in Java

I was reading about currying in functional programming, and I have a very basic question:

If I have two functions in Java

int add(int x, int y) {
    return x + y;
}

and I create another method

int increment(int y) {
    return add(1, y);
}

In the above code, when I wrote the increment function, did I actually curry add?
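
For contrast, a sketch of currying proper in Scala, where it is built into the syntax: a curried add takes its arguments one parameter list at a time, and increment falls out as a partial application:

def add(x: Int)(y: Int): Int = x + y  // curried: Int => (Int => Int)

val increment = add(1) _              // partially apply the first argument
increment(41)                         // 42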

by user2434 at March 27, 2015 04:43 PM

/r/compsci

StackOverflow

Ansible: how to construct a variable from another variable and then fetch it's value

Here is my problem: I need to use one variable, 'target_host', and then append '_host' to its value to get another variable name whose value I need. In my playbook below, tasks 1, 2, and 3 fetch the values as expected, but task 4 does not do what I expect. Is there any other way to achieve this in Ansible?

   ---
    - name: "Play to for dynamic groups"
      hosts: local 
      vars:
        - target_host: smtp
        - smtp_host: smtp.max.com
      tasks:
        - name: testing
          debug: msg={{ target_host }}
        - name: testing
          debug: msg={{ smtp_host }}
        - name: testing
          debug: msg={{ target_host }}_host
        - name: testing
          debug: msg={{ {{ target_host }}_host }}


Output:

TASK: [testing] *************************************************************** 
ok: [127.0.0.1] => {
    "msg": "smtp"
}

TASK: [testing] *************************************************************** 
ok: [127.0.0.1] => {
    "msg": "smtp.max.com"
}

TASK: [testing] *************************************************************** 
ok: [127.0.0.1] => {
    "msg": "smtp_host"
}

TASK: [testing] *************************************************************** 
ok: [127.0.0.1] => {
    "msg": "{{{{target_host}}_host}}"
}

by Max at March 27, 2015 04:35 PM

CompsciOverflow

Build-Max-Heap vs. HeapSort

I'm not sure whether my definitions of these two terms are correct. Hence, could you help me verify them?

HeapSort: A procedure which sorts an array in place.

Build-Max-Heap: A procedure which runs in linear time, produces a max- heap from an unordered input array.

Is a worst-case input the same thing as worst-case running time?

If so, given input size $n$ for Build-Max-Heap, would its worst-case input be the same as that of HeapSort, which runs in $\mathcal{O}(n \log n)$?
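
For reference, the standard argument that Build-Max-Heap is linear even in the worst case sums the sift-down work over node heights $h$: $\sum_{h=0}^{\lfloor \log_2 n \rfloor} \lceil n/2^{h+1} \rceil \, \mathcal{O}(h) = \mathcal{O}\big(n \sum_{h \geq 0} h/2^h\big) = \mathcal{O}(n)$, since the inner series converges to a constant.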

by iterence at March 27, 2015 04:35 PM

/r/compilers

Writing a compiler in Rust, D or Go?

Hi all. I'm trying to decide which language to use to write a compiler for what will probably be a subset of C.

I have Rust, Go and D in mind for this. I'm from a C/C++ background, and I've already written interpreters and compilers in C++, so I want to try something new.

I know that D is the most mature. However, it reminds me too much of C++, so I'd like to steer clear of that for now.

Go has been getting a lot of flak lately but it appears to me that it has some great tooling and a great standard library so it shouldn't be that bad.

Rust is a nice language but always changing. I was thinking of writing it in more than one.

Any ideas/suggestions? Thanks guys.

submitted by __cplusplus
[link] [12 comments]

March 27, 2015 04:24 PM

StackOverflow

yum install python-setuptools to install easy_install and ansible - errors: AttributeError: other Python Errors

Goal: Install ansible on a RedHat Linux machine.

A little overview of how it all started: when my Linux machine was RedHat 5.9 (Tikanga), the default installed Python version was 2.4. I tried my best, but couldn't get anything to work, as Ansible requires Python >= 2.6. I tried installing Python 2.7.9 on Linux 5.9, but then things started to act up really fast.

I did try Python 2.7.9 on Linux 5.9 with "make altinstall" instead of "make install", but there were still lots of errors while running yum and other system-level commands.

Few errors which came there were (with or without running sudo):

# sudo pip install ansible

Traceback (most recent call last):
  File "/usr/bin/pip", line 7, in ?
    sys.exit(
  File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 236, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 2097, in load_entry_point
    return ep.load()
  File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 1830, in load
    entry = __import__(self.module_name, globals(),globals(), ['__name__'])
  File "/usr/lib/python2.4/site-packages/pip-6.0.8-py2.4.egg/pip/__init__.py", line 211
    except PipError as exc:
                     ^
SyntaxError: invalid syntax

or

# sudo easy_install pip

Searching for pip
Best match: pip 6.0.8
Processing pip-6.0.8-py2.4.egg
pip 6.0.8 is already the active version in easy-install.pth
Installing pip script to /usr/bin
Installing pip2 script to /usr/bin
Installing pip2.4 script to /usr/bin

Using /usr/lib/python2.4/site-packages/pip-6.0.8-py2.4.egg
Processing dependencies for pip

or

# sudo pip install ansible

Traceback (most recent call last):
  File "/usr/bin/pip", line 7, in ?
    sys.exit(
  File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 236, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 2097, in load_entry_point
    return ep.load()
  File "/usr/lib/python2.4/site-packages/pkg_resources.py", line 1830, in load
    entry = __import__(self.module_name, globals(),globals(), ['__name__'])
  File "/usr/lib/python2.4/site-packages/pip-6.0.8-py2.4.egg/pip/__init__.py", line 211
    except PipError as exc:
                     ^
SyntaxError: invalid syntax

or

# sudo easy_install ansible

'import site' failed; use -v for traceback
Traceback (most recent call last):
  File "/usr/bin/easy_install", line 5, in ?
    from pkg_resources import load_entry_point
ImportError: No module named pkg_resources

etc....

Finally, as luck would have it, I thought: let's try installing Python again from scratch (so I ran yum erase python, !!! beware !!!!) and, to my knowledge, it was the best command I ever ran, give or take a little oversight. End result: I ended up creating a new product, here: http://www.keepcalmandcarryon.com/creator/?shortcode=qCsMlpyc

Anyways... now I got the server revived with a newer version of RedHat (6.6, Santiago), and this time the default Python on it was 2.6.6.



Current situation: THIS is what I'm facing now on RH Linux 6.6 with Python 2.6.6 installed.

I'm running: sudo easy_install pip but I got an error:

sudo: easy_install: command not found

To resolve the above, I'm now running: sudo yum install python-setuptools. It found the package... but it's showing me an error:

Loaded plugins: product-id, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Setting up Install Process
http://74.125.194.100/yum/x86_64/6Server/%24YUM0/Server/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Trying other mirror.
http://74.125.194.100/yum/x86_64/supplemental/%24YUM0/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Trying other mirror.
Resolving Dependencies
--> Running transaction check
---> Package python-setuptools.noarch 0:0.6.10-3.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==============================================================================================================================================================================================================================
 Package                                                   Arch                                           Version                                                Repository                                              Size
==============================================================================================================================================================================================================================
Installing:
 python-setuptools                                         noarch                                         0.6.10-3.el6                                           release.update                                         336 k

Transaction Summary
==============================================================================================================================================================================================================================
Install       1 Package(s)

Total download size: 336 k
Installed size: 1.5 M
Is this ok [y/N]: y
Downloading Packages:
http://74.125.194.100/yum/x86_64/6Server/%24YUM0/Server/../Packages/python-setuptools-0.6.10-3.el6.noarch.rpm: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not Found"
Trying other mirror.


Error Downloading Packages:
  python-setuptools-0.6.10-3.el6.noarch: failure: ../Packages/python-setuptools-0.6.10-3.el6.noarch.rpm from release.update: [Errno 256] No more mirrors to try.

-bash-4.1$

Any idea how I can get easy_install, pip, or ansible onto my Linux 6.6 machine now?

Thanks.

by Arun Sangal at March 27, 2015 04:18 PM

Concatenating a list of strings

Is there a built-in method in Scala to concatenate all of the strings in a list of strings? If not, how would I do this?
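
One built-in that fits is mkString, available on Scala collections; it concatenates the elements' string forms, optionally with a separator:

val xs = List("foo", "bar", "baz")
xs.mkString        // "foobarbaz"
xs.mkString(", ")  // "foo, bar, baz"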

by ggreeppy at March 27, 2015 04:15 PM

QuantOverflow

Why is Brownian motion merely 'almost surely' continuous?

Why is Brownian motion required to be merely almost surely continuous instead of continuous?

For example, this is stated as condition 2 in this article in section 1, Characterizations of the Wiener process, where it says "The function $t \rightarrow W_t$ is almost surely everywhere continuous." What is an example of a Brownian motion where there is a point at which the motion is not continuous?

by user50229 at March 27, 2015 04:11 PM

/r/netsec

TheoryOverflow

Some questions about the paper, "Hypercontractivity, Sum-of-squares Proofs and Their Applications"

I am referring to this famous paper, http://arxiv.org/abs/1205.4484

At the top of page 42, the authors write an equation like $f = Gg$. This seems to be the same as what they say in the next line, that $f(x) = \mathbb{E}_{(y,x) \in E(G)}[g(y)]$ (i.e., $f(x)$ is the expectation of $g$ over the neighbours of $x$).

  • (1) On page 40, in the proof of Lemma 8.2, they had defined $f = \sum_i \alpha_i \chi_i$ (with the coefficients $\alpha_i$ chosen such that $f$ is forced to have expectation 2-norm 1) and $g = \sum_i (\alpha_i/\lambda_i) \chi_i$ (where the sum ranges over indices $i$ with eigenvalue $\geq \lambda$)

Are they saying that this particular $f$ and $g$ above satisfy this condition of being $f = Gg$?

If yes, then how?

  • (2) On page 41, in the proof of Lemma 8.4, they argue in the first case (i) that the maximum probability of any event in this new distribution D' is 1/N, and (ii) that this somehow implies that D' is a convex combination of flat distributions over some sets.

Could someone kindly elaborate on this point a bit more? Like what is the intuitive understanding behind these special sets "T" (the support of these flat distributions) that get defined eventually in Claim 8.3?

  • (3) In the first point of the list on page 42, in the quest to prove Claim 8.3, they apply Lemma 8.4 to the graph, assuming that the set $S$ of Claim 8.3 can be the sets $U_i$ defined at the end of page 40, and that the role of $\beta$ in Claim 8.3 can be played by the lower bound on $f$ inside $U_i$ given at the bottom of page 40, i.e. $\frac{c^i}{\sqrt{\delta}}$.

But to apply Lemma $8.4$ to these $U_i$ one needs to show that $\vert U_i\vert \leq \delta$. Why is that true?

  • (4) Is there a way to see sections 8 and 5 of this paper as an "SOS hierarchy"? I am not able to see these proofs as solving such an "SOS hierarchy/program". Could someone kindly elaborate on this interpretation?

by user6818 at March 27, 2015 04:05 PM

/r/types

2 dimensional proof theory?

Hi all,

By 2 dimensional proof theory, I'm referring to the equations that arise between proofs given some proof term model. For example, the two proofs

 -----        .
   A          .
 -----      -----
   A          A         B
 ----- inL  ------    ------
 A + B      A -> A    B -> B
 ----------------------------- +-elim
               A

and

   A
 -----
   A

are typically equated because of the beta reduction rule for + in the lambda calculus proof term model. However, there's no need to equate them a priori: we can consider the collection of proofs freely generated by our rules without quotienting them by such relations. (I have some more interesting examples, but they require a lot of background explanation.)

Does anyone know of a treatment of this concept?

submitted by seriousreddit
[link] [10 comments]

March 27, 2015 04:05 PM

High Scalability

Stuff The Internet Says On Scalability For March 27th, 2015

Hey, it's HighScalability time:


@scienceporn: That Hubble Telescope picture explained in depth. I have never had anything blow my mind so hard.

  • hundreds of billions: files in Dropbox; $2 billion: amount Facebook saved building their own servers, racks, cooling, storage, flat fabric, etc.
  • Quotable Quotes:
    • Buckminster Fuller: I was born in the era of the specialist. I set about to be purposely comprehensive. I made up my mind that you don't find out something just to entertain yourself. You find out things in order to be able to turn everything not just into a philosophical statement, but actual tools to reorganize the environment of man by which greater numbers of men can prosper. That's been my main undertaking.
    • @mjpt777: PCI-e based SSDs are getting so fast. Writing at 1GB/s for well less than $1000 is so cool.
    • @DrQz: All meaning has a pattern, but not all patterns have a meaning.
    • Stu: “Exactly once” has *always* meant “at least once but dupe-detected”. Mainly because we couldn’t convince customers to send idempotent and communitative state changes.
    • @solarce: When some companies have trouble scaling their database they use Support or Consultants. Apple buys a database vendor. 
    • @nehanarkhede: Looks like Netflix will soon surpass LinkedIn's Kafka deployment of 800B events a day. Impressive.
    • @ESPNFantasy: More than 11.57 million brackets entered. Just 14 got the entire Sweet 16 correct.
    • @BenedictEvans: A cool new messaging app getting to 1m users is the new normal. Keeping them, and getting to 100m, is the question.
    • @jbogard: tough building software systems these days when your only two choices are big monoliths and microservices
    • @nvidia: "It isn't about one GPU anymore, it's about 32 GPUs" Andrew Ng quotes Jen-Hsun Huang. GPU scaling is important #GTC15

  • FoundationDB, a High Scalability advertiser and article contributor, has been acquired. Apple scooped them up. Though using between 5% and 10% less hardware than Cassandra seems unlikely. And immediately taking their software off GitHub is a concerning trend. It adds uncertainty to the entire product selection dance. Something to think about.

  • In the future when an AI tries to recreate a virtual you from your vast data footprint, the loss of FriendFeed will create a big hole in your virtual personality. I think FF catches a side of people that isn't made manifest in other mediums. Perhaps 50 years from now people will look back on our poor data hygiene with horror and disbelief. How barbaric they were in the past, people will say. 

  • When the nanobots turn the world to goo this 3D printer can recreate it again. New 3-D printer that grows objects from goo. Instead of a world marked by an endless battle of good vs evil we'll have a ceaseless cycle of destruction and rebirth through goo. That's unexpected. A modern mythology in the making.

Don't miss all that the Internet has to say on Scalability, click below and become eventually consistent with all scalability knowledge (which means this post has many more items to read so please keep on reading)...

by Todd Hoff at March 27, 2015 03:56 PM

StackOverflow

How to handle exceptions in Controller constructors in Play

I'm using Play 2.3.7. I have a Global.onError method and it gets called when an exception is raised in an Action. However, it does not get called when an exception is raised in the constructor of a play.api.mvc.Controller. The default error page is served instead.

The code looks something like this:

object MyController extends Controller {
  assert(false)
  val something = Action { request => ??? }
}

The assertion failure happens the first time a request is routed to the controller. It is logged in the "! Internal server error, for ..." format, but is not handled by Global.onError. How could I handle this exception?
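
Given the post's own observation that Global.onError does fire for exceptions raised inside an Action, one workaround to sketch (not necessarily the idiomatic Play answer) is deferring construction-time work until an action actually runs, e.g. behind a lazy val:

object MyController extends Controller {
  lazy val checked = assert(false)  // deferred until first use
  val something = Action { request =>
    checked                         // the failure now surfaces inside the Action
    ???
  }
}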

by Daniel Darabos at March 27, 2015 03:47 PM

Ansible, YAML, and Syntax

I am trying to create an Ansible configuration that will run a playbook and utilize a single variable file to create a single configuration with multiple items. I am trying the following syntax and it is failing. How can I fix this?

vars/main.yml

---
or1host1:

      - interface: 1/1
        description: or1-servertest
        TRUNK: true
        allowedVlans: 101-103
        NVLAN: true
        nativeVLAN: 101
        ACCESS: false
        accessVlan: none
        PC: true
        pcNUM: 10

      - interface: 1/2
        description: or1-servertest2
        TRUNK: false
        allowedVlans: 101-103
        NVLAN: false
        nativeVLAN: 101
        ACCESS: true
        accessVlan: none
        PC: true
        pcNUM: 10

templates/nxos.j2

{% for interface in or1host1 %}
interface Ethernet{{item.interface}}
description {{item.description}}
{% if item.TRUNK %}
  switchport mode trunk
  switchport trunk allowed vlan {{item.allowedVlans}}
  spanning-tree port type edge trunk
{% if item.NVLAN %}
  switchport trunk native vlan {{item.nativeVLAN}}
{% endif %}
{% endif %}
{% if item.ACCESS %}
  switchport mode access
  switchport access vlan {{item.accessVlan}}
  spanning-tree port type edge
{% endif %}
{% if item.PC %}
  channel-group {{item.pcNUM}} mode active
{% endif %}
  no shut
{% endfor %}

I am receiving the following error when running the playbook.

PLAY [Generate Configuration Files] ******************************************* 

GATHERING FACTS *************************************************************** 
ok: [localhost]

TASK: [nxos | Generate configuration files] *********************************** 
fatal: [localhost] => {'msg': "AnsibleUndefinedVariable: One or more undefined variables: 'str object' has no attribute 'interface'", 'failed': True}
fatal: [localhost] => {'msg': 'One or more items failed.', 'failed': True, 'changed': False, 'results': [{'msg': "AnsibleUndefinedVariable: One or more undefined variables: 'str object' has no attribute 'interface'", 'failed': True}]}

FATAL: all hosts have already failed -- aborting

PLAY RECAP ******************************************************************** 
           to retry, use: --limit @/home/gituser/site.retry

localhost                  : ok=1    changed=0    unreachable=1    failed=

0

by Matt at March 27, 2015 03:42 PM

ansible: how to iterate over all registered results?

Given the following playbook:

---
- name: Check if log directory exists - Step 1
  stat: path="{{ wl_base }}/{{ item.0.name }}/{{ wl_dom }}/servers/{{ item.1 }}/logs" get_md5=no
  register: log_dir
  with_subelements:
    - wl_instances
    - servers

- name: Check if log directory exists - Step 2
  fail: msg="Log directory does not exists or it is not a symlink."
  failed_when: >
    log_dir.results[0].stat.islnk is not defined
    or log_dir.results[0].stat.islnk != true
    or log_dir.results[0].stat.lnk_source != "{{ wl_base }}/logs/{{ wl_dom }}/{{ item.1 }}"
  with_subelements:
    - wl_instances
    - servers

that is using the following vars:

---
wl_instances:
  - name: aservers
    servers:
      - AdminServer
  - name: mservers
    servers:
       - "{{ ansible_hostname }}"

the second task currently only uses one of the two possible results (results[0]).

My question is: how could I iterate over all available items stored in log_dir.results?

A sample output of debug: hostvars[inventory_hostname] follows:

    "log_dir": {
        "changed": false,
        "msg": "All items completed",
        "results": [
            {
                "changed": false,
                "invocation": {
                    "module_args": "path=\"/path/to/servers/aservers/domain/AdminServer/logs\" get_md5=no",
                    "module_name": "stat"
                },
                "item": [
                    {
                        "name": "aservers"
                    },
                    "AdminServer"
                ],
                "stat": {
                    ...
                    "lnk_source": "/path/to/logs/domain/AdminServer",
                    ...
                }
            },
            {
                "changed": false,
                "invocation": {
                    "module_args": "path=\"/path/to/servers/mservers/domain/servers/some_hostname/logs\" get_md5=no",
                    "module_name": "stat"
                },
                "item": [
                    {
                        "name": "mservers"
                    },
                    "some_hostname"
                ],
                "stat": {
                    ...
                    "lnk_source": "/path/to/logs/domain/some_hostname",
                    ...

by dawud at March 27, 2015 03:42 PM

QuantOverflow

Is there any way to easily estimate and forecast seasonal ARIMA-GARCH model in any software?

I use R to estimate a seasonal ARIMA(8,0,0)(5,0,1)[7] model for the seasonal differences of logs of daily electricity prices:

daily.fit <- arima(sd_l_daily_adj$Price,
                   order=c(8,0,0),
                   seasonal=list(order=c(5,0,1), period=7),
                   xreg = sd_l_daily_adj$Holiday,
                   include.mean=FALSE)

The problem is that, of all the packages I've tried, only R's base arima function allows for the seasonal specification. Packages with GARCH estimation functions, such as fGarch and rugarch, only allow an ordinary ARMA(p, q) specification for the mean equation.

Any suggestions for any kind of software are welcome,

Thanks

by stofer at March 27, 2015 03:39 PM

StackOverflow

How Do Callbacks work in Non-blocking Design?

I looked at a few other questions but didn't quite find what I was looking for. I'm using Scala, but my question is very high level and so is hopefully language-agnostic.


A regular scenario (a code sketch follows the list):

  1. Thread A runs a function and there is some blocking work to be done (say a DB call).
  2. The function has some non-blocking code (eg. Async block in Scala) to cause some sort of 'worker' Thread B (in a different pool) to pick up the I/O task.
  3. The method in Thread A completes returning a Future which will eventually contain the result and Thread A is returned to its pool to quickly pick up another request to process.
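
A minimal sketch of those three steps in Scala (assuming Scala 2.12+ Futures; the thread-per-call worker is just a stand-in for a real I/O pool):

import scala.concurrent.{Future, Promise}
import scala.concurrent.ExecutionContext.Implicits.global

// Steps 1-2: "Thread A" returns immediately; a worker completes the promise later
def handleRequest(): Future[String] = {
  val p = Promise[String]()
  new Thread(() => {    // "Thread B": blocking work off the request-processing pool
    Thread.sleep(100)   // stand-in for the blocking DB call
    p.success("db result")
  }).start()
  p.future              // Step 3: Thread A is free again at this point
}

// The callback runs on a pool thread once the Future completes
handleRequest().foreach(r => println(s"callback ran with: $r"))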

Q1. Does some thread somewhere usually have to wait?

My understanding of non-blocking architectures is that the common approach is to still have some thread waiting/blocking on the I/O work somewhere; it's just a case of having different pools which have access to different cores, so that a small number of request-processing threads can manage a large number of concurrent requests without ever waiting on a CPU core.

Is this a correct general understanding?

Q2. How does the callback work?

In the above scenario, Thread B, which is doing the I/O work, will run the callback function (provided by Thread A) if/when the I/O work has completed, which completes the Future with some Result.

Thread A is now off doing something else and no longer has any association with the original request. How does the Result in the Future get sent back to the client socket? I understand that different languages implement such a mechanism differently. But at a high level, my current assumption is that, regardless of the language/framework, some framework/container objects must always be doing some sort of orchestration, so that when a Future task is completed the Result gets sent back to the original socket handling the request.


I have spent hours trying to find articles which explain this, but every article seems to deal only with really low-level details. I know I'm missing some details, but I am having difficulty asking my question because I'm not quite sure which parts I'm missing :)

by JamieP at March 27, 2015 03:34 PM

/r/netsec

StackOverflow

MongoDB LineString geofence with maxDistance

I would like to query a LineString with maxDistance using MongoDB and Scala. The LineString contains the location points of one route.

This works fine for a Point:

val geo = MongoDBObject(
  "loc" -> MongoDBObject(
    "$nearSphere" -> MongoDBObject(
      "type" -> "Point",
      "coordinates" -> MongoDBList( lat,
        lon)
    ),
    "$maxDistance" -> rad ))

val geoEvents=mongoColl.find(geo)
println(geoEvents)

Trying to convert this for LineString does not work.

val geo = MongoDBObject(
  "loc" -> MongoDBObject(
    "$nearSphere" -> MongoDBObject(
      "type" -> "LineString",
      "coordinates" -> MongoDBList( MongoDBList(52.5564, 13.37.86), MongoDBList(52.5586,13.38.62), MongoDBList(52.5608,13.3786))
    ),
    "$maxDistance" -> 300 ))

val geoEvents=mongoColl.find(geo)
println(geoEvents)

Is it possible to make a geofence query with a LineString?

by masterWN at March 27, 2015 03:29 PM

CompsciOverflow

If $F$ is valid then $F \cup \{res(C_1,C_2,A_i)\}$ is valid

I have to prove the following problem in propositional logic:

Let $F$ be a set of clauses and let $F' = F \cup \{res(C_1,C_2,A_i)\}$ be the extension of $F$ by a resolvent of some clauses $C_1,C_2 \in F$ where $A_i$ is a literal occuring positively in $C_1$ and negatively in $C_2$.

Prove that: If $F$ is valid, then $F'$ is valid.

So in other words, I have to prove that when I take the union of the original clause set $F$ and the clause obtained by applying resolution to clauses of $F$ over a literal $A_i$, validity is preserved.

I think that this should be provable by applying a direct proof.

Recall: Note that resolution is defined as follows: given two clauses: $C_1 = (A_1 \lor \dots \lor A_i \lor \dots \lor A_n)$ and $C_2 = (B_1 \lor \dots \lor B_j \lor \dots \lor B_m)$ such that for some $i, j$ with $1 \leq i \leq n$, and $1 \leq j \leq m,\; A_i = \neg B_j$,
the resolvent of $C_1$ and $C_2$ on $A_i$ is the clause

$res(C_1,C_2,A_i) = (A_1 \lor \dots \lor A_{i-1} \lor A_{i+1} \lor \dots \lor A_n \lor B_1 \lor \dots B_{j-1} \lor B_{j+1} \lor \dots \lor B_m)$

EDIT: Here is my try:

Let $I$ be an interpretation taken from the set of models $Mod(F)$. Hence, $I(F) = 1$. Because this interpretation satisfies $F$, it must also satisfy $C_1$ and $C_2$. If we take a look at the structure of $C_1$ and $C_2$, we have to distinguish between 2 different cases:

(1) $A_i$ is positive in $C_1$ but negative in $C_2$ and

(2) $A_i$ is negative in $C_1$ but positive in $C_2$.

In case (1), if $A_i$ is set to true it is true in $C_1$ but false in $C_2$. In case (2), if $A_i$ is set to true it is true in $C_2$ but false in $C_1$.

$A_i$ cannot be the literal which preserves satisfiability of $F$. Hence $I$ must include some assignment of other literals in $F$ such that $F$ is satisfied. Thus, the resolvent $res(C_1,C_2,A_i)$ is also satisfied.

by user1291235 at March 27, 2015 03:16 PM

StackOverflow

How can I remove specific element from a list?

I know this might be a silly question, but I don't get it. I have some data:

(def x (range 1 14))

-> (1 2 3 4 5 6 7 8 9 10 11 12 13)

I want to return a list without "3" for example. Googling "clojure remove item from list" took me to this:

  (remove pred coll)

So I tried to use example with even?:

  (remove even? x) 

  -> (1 3 5 7 9 11 13)

Great! It works with my data! I just need to change pred. And my first guess would be:

  (remove (= 3) x)

  java.lang.ClassCastException: java.lang.Boolean cannot be cast to clojure.lang.IFn

Ok, we don't need to evaluate (= 3), so let's put # before:

  (remove #(= 3) x)

  clojure.lang.ArityException: Wrong number of args (1)` passed to...

I know it is trivial, but still: how can I do it?

by m3nthal at March 27, 2015 03:13 PM

NoClassDefFoundError: CompanionConversion in scala activerecord

My goal is to connect to a MySQL DB and read/write tables; however, I keep getting the following error at runtime (it compiles fine):

Exception in thread "main" java.lang.NoClassDefFoundError:com/github/aselab/activerecord/inner/CompanionConversion

My code is like this:

build.sbt:

name := "Simple Project"
version := "1.0"
scalaVersion := "2.10.4"

libraryDependencies ++= Seq(
  "com.github.aselab" %% "scala-activerecord" % "0.3.1",
  "org.slf4j" % "slf4j-nop" % "1.7.10", // other options are: slf4j-simple, logback-classic, etc...
  "mysql" % "mysql-connector-java" % "5.1.34"
)

./src/main/scala/Test.scala:

import com.github.aselab.activerecord._
import com.github.aselab.activerecord.dsl._
import dsl._

class Test {
  case class DateTest(var unix_timestamp: Long) extends ActiveRecord 
  object Tables extends ActiveRecordTables {
    val date_tests = table[DateTest]

  }   
  Tables.initialize(Map("schema" -> "tables.dev"))
  object DateTest extends ActiveRecordCompanion[DateTest]
  val a1 = DateTest.findBy("id" -> 1)
  println(s"field value: ${a1.get.unix_timestamp}")
}

./src/main/resources/application.conf:

tables {
  dev {
    driver = "some.Driver"
    jdbcurl = "some_url"
    username = "some_username"
    password = "some_password"
    partitionCount = 5 
    maxConnectionsPerPartition = 1 
    minConnectionsPerPartition = 5 
    autoCreate = false
    autoDrop = false
  }
  test {
    driver = "some.Driver"
    jdbcurl = "some:test"
    autoCreate = true
    autoDrop = true
  }
}

Any help is appreciated!

by Iam619 at March 27, 2015 03:12 PM

/r/compsci

StackOverflow

How to install SecureSocial on Play Frame work 2.3.8?

I have been working on learning how to use the Play framework. It is my understanding that plugins can be used to add functionality that others have developed. Maybe I should be calling it a module? I read that SecureSocial is one of the best modules available for authentication. But the documentation isn't really getting me anywhere. Can someone help me understand how to add the master-snapshot to my existing Java project?

Let's assume that I ran activator new my-first-app play-scala and activator eclipse, and then imported the project into Eclipse.

The next step is to try to follow the directions at the following URL:

http://securesocial.ws/guide/installation.html

After reading that I am still lost.

There is no Build.scala file, but I see there is a build.sbt file. Do I add this block to the build.sbt file?

object ApplicationBuild extends Build {
val appName         = "my-first-app"
val appVersion      = "1.0-SNAPSHOT"

val appDependencies = Seq(
    "ws.securesocial" %% "securesocial" % "master-SNAPSHOT"
)
val main = play.Project(appName, appVersion, appDependencies).settings(
    resolvers += Resolver.sonatypeRepo("releases")
)

val main = play.Project(appName, appVersion, appDependencies).settings(
  resolvers += Resolver.sonatypeRepo("snapshots")
)

After copying the block above, I created the play.plugins file in the conf folder. Then I copied all the plugins into the file and saved:

1500:com.typesafe.plugin.CommonsMailerPlugin
9994:securesocial.core.DefaultAuthenticatorStore
9995:securesocial.core.DefaultIdGenerator
9996:securesocial.core.providers.utils.DefaultPasswordValidator
9997:securesocial.controllers.DefaultTemplatesPlugin
9998:your.user.Service.Implementation <-- Important: You need to change this
9999:securesocial.core.providers.utils.BCryptPasswordHasher
10000:securesocial.core.providers.TwitterProvider
10001:securesocial.core.providers.FacebookProvider
10002:securesocial.core.providers.GoogleProvider
10003:securesocial.core.providers.LinkedInProvider
10004:securesocial.core.providers.UsernamePasswordProvider
10005:securesocial.core.providers.GitHubProvider
10006:securesocial.core.providers.FoursquareProvider
10007:securesocial.core.providers.XingProvider
10008:securesocial.core.providers.VkProvider
10009:securesocial.core.providers.InstagramProvider

Next, I copied all the routes into the routes file:

# Login page
GET     /login                      securesocial.controllers.LoginPage.login
GET     /logout                     securesocial.controllers.LoginPage.logout

# User Registration and password handling 
GET     /signup                     securesocial.controllers.Registration.startSignUp
POST    /signup                     securesocial.controllers.Registration.handleStartSignUp
GET     /signup/:token              securesocial.controllers.Registration.signUp(token)
POST    /signup/:token              securesocial.controllers.Registration.handleSignUp(token)
GET     /reset                      securesocial.controllers.Registration.startResetPassword
POST    /reset                      securesocial.controllers.Registration.handleStartResetPassword
GET     /reset/:token               securesocial.controllers.Registration.resetPassword(token)
POST    /reset/:token               securesocial.controllers.Registration.handleResetPassword(token)
GET     /password                   securesocial.controllers.PasswordChange.page
POST    /password                   securesocial.controllers.PasswordChange.handlePasswordChange

# Providers entry points
GET     /authenticate/:provider     securesocial.controllers.ProviderController.authenticate(provider)
POST    /authenticate/:provider     securesocial.controllers.ProviderController.authenticateByPost(provider)
GET     /not-authorized             securesocial.controllers.ProviderController.notAuthorized

I then try to run the project and get the following error:

[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  ::          UNRESOLVED DEPENDENCIES         ::
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  :: ws.securesocial#securesocial;2.1.4: not found
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]
[warn]  Note: Some unresolved dependencies have extra attributes.  Check that these dependencies exist with the requested attributes.
[warn]          ws.securesocial:securesocial:2.1.4 (scalaVersion=2.10, sbtVersion=0.13)
[warn]
sbt.ResolveException: unresolved dependency: ws.securesocial#securesocial;2.1.4:not found
    at sbt.IvyActions$.sbt$IvyActions$$resolve(IvyActions.scala:217)
    at sbt.IvyActions$$anonfun$update$1.apply(IvyActions.scala:126)
    at sbt.IvyActions$$anonfun$update$1.apply(IvyActions.scala:125)
    at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:115)
    at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:115)
    at sbt.IvySbt$$anonfun$withIvy$1.apply(Ivy.scala:103)
    at sbt.IvySbt.sbt$IvySbt$$action$1(Ivy.scala:48)
    at sbt.IvySbt$$anon$3.call(Ivy.scala:57)
    at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:93)
    at xsbt.boot.Locks$GlobalLock.xsbt$boot$Locks$GlobalLock$$withChannelRetries$1(Locks.scala:78)
    at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:97)
    at xsbt.boot.Using$.withResource(Using.scala:10)
    at xsbt.boot.Using$.apply(Using.scala:9)
    at xsbt.boot.Locks$GlobalLock.ignoringDeadlockAvoided(Locks.scala:58)
    at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:48)
    at xsbt.boot.Locks$.apply0(Locks.scala:31)
    at xsbt.boot.Locks$.apply(Locks.scala:28)
    at sbt.IvySbt.withDefaultLogger(Ivy.scala:57)
    at sbt.IvySbt.withIvy(Ivy.scala:98)
    at sbt.IvySbt.withIvy(Ivy.scala:94)
    at sbt.IvySbt$Module.withModule(Ivy.scala:115)
    at sbt.IvyActions$.update(IvyActions.scala:125)
    at sbt.Classpaths$$anonfun$sbt$Classpaths$$work$1$1.apply(Defaults.scala:1223)
    at sbt.Classpaths$$anonfun$sbt$Classpaths$$work$1$1.apply(Defaults.scala:1221)
    at sbt.Classpaths$$anonfun$doWork$1$1$$anonfun$74.apply(Defaults.scala:1244)
    at sbt.Classpaths$$anonfun$doWork$1$1$$anonfun$74.apply(Defaults.scala:1242)
    at sbt.Tracked$$anonfun$lastOutput$1.apply(Tracked.scala:35)
    at sbt.Classpaths$$anonfun$doWork$1$1.apply(Defaults.scala:1246)
    at sbt.Classpaths$$anonfun$doWork$1$1.apply(Defaults.scala:1241)
    at sbt.Tracked$$anonfun$inputChanged$1.apply(Tracked.scala:45)
    at sbt.Classpaths$.cachedUpdate(Defaults.scala:1249)
    at sbt.Classpaths$$anonfun$updateTask$1.apply(Defaults.scala:1214)
    at sbt.Classpaths$$anonfun$updateTask$1.apply(Defaults.scala:1192)
    at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
    at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:42)
    at sbt.std.Transform$$anon$4.work(System.scala:64)
    at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:237)
    at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:237)
    at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:18)
    at sbt.Execute.work(Execute.scala:244)
    at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:237)
    at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:237)
    at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:160)
    at sbt.CompletionService$$anon$2.call(CompletionService.scala:30)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
[error] (*:update) sbt.ResolveException: unresolved dependency: ws.securesocial#securesocial;2.1.4: not found
Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore?
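
For what it's worth, a sketch of what the same settings could look like in plain build.sbt syntax (the object ... extends Build form belongs in a project/Build.scala file, not in build.sbt; the names below mirror the question):

name := "my-first-app"

version := "1.0-SNAPSHOT"

resolvers += Resolver.sonatypeRepo("snapshots")

libraryDependencies += "ws.securesocial" %% "securesocial" % "master-SNAPSHOT"

Note also that the error asks for ws.securesocial#securesocial;2.1.4 even though the snippet declares master-SNAPSHOT, which suggests the build that produced the error still requests 2.1.4 somewhere.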

by pitchblack408 at March 27, 2015 03:02 PM

Spark application throws javax.servlet.FilterRegistration

I'm using Scala to create and run a Spark application locally.

My build.sbt:

name := "SparkDemo"

version := "1.0"

scalaVersion := "2.10.4"


libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.0"    exclude("org.apache.hadoop", "hadoop-client")

libraryDependencies += "org.apache.spark" % "spark-sql_2.10" % "1.2.0"


libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "2.6.0"  excludeAll(
ExclusionRule(organization = "org.eclipse.jetty"))

libraryDependencies += "org.apache.hadoop" % "hadoop-mapreduce-client-core" % "2.6.0"


libraryDependencies += "org.apache.hbase" % "hbase-client" % "0.98.4-hadoop2"

libraryDependencies += "org.apache.hbase" % "hbase-server" % "0.98.4-hadoop2"

libraryDependencies += "org.apache.hbase" % "hbase-common" % "0.98.4-hadoop2"

mainClass in Compile := Some("demo.TruckEvents")

During runtime I get the following exception:

Exception in thread "main" java.lang.ExceptionInInitializerError during calling of... Caused by: java.lang.SecurityException: class "javax.servlet.FilterRegistration"'s signer information does not match signer information of other classes in the same package

The exception is triggered here:

val sc = new SparkContext("local", "HBaseTest")

I am using the IntelliJ Scala/SBT plugin.

I've seen that other people also have this problem, with a suggested solution. But that is a Maven build... is my sbt wrong here? Or do you have any other suggestion for how I can solve this problem? Thanks.
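
For reference, a minimal sketch of one commonly suggested direction, translated to sbt and unverified against this exact setup: keep the conflicting servlet-api classes off the classpath by excluding them from the HBase dependencies, the same way the Jetty exclusion above is written:

libraryDependencies += "org.apache.hbase" % "hbase-server" % "0.98.4-hadoop2" excludeAll(
  // hypothetical exclusions: drop the bundled jetty/servlet jars whose signer
  // information clashes with the servlet-api that Spark already brings in
  ExclusionRule(organization = "org.mortbay.jetty"),
  ExclusionRule(organization = "javax.servlet")
)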

by Hawk66 at March 27, 2015 02:55 PM

Lobsters

StackOverflow

How to set run arguments when using Ansible to deploy docker?

When using Ansible to deploy Docker containers, how do you set the ARGS? That is, the ARGS in the following docker command:

docker create  --name my_container my_image ARGS

I tried to set the ARGS in the docker: command variable, but it wasn't picked up. What's the correct way to set the run ARGS? Here is what I tried:

- name: deploy docker image
  sudo: yes
  docker:
    image: "{{ docker_image_name }}:{{ docker_image_version }}"
    state: reloaded
    name: "{{ docker_container_name }}"
    command: "{{ docker_args }}"

in my group vars I have something like

[my_hosts:vars]
docker_args="-Dconfig=qa.conf"

My Dockerfile has an entrypoint:

ENTRYPOINT ["bin/my_application"]

by KailuoWang at March 27, 2015 02:42 PM

Temporarily replace a variable for an include

I have a task like this:

- include: tasks/install_nginx_vhost.yml
  vars:
    domain_name: learn.{{ domain_name }}

- include: tasks/install_nginx_vhost.yml
  vars:
    domain_name: author.{{ domain_name }}

But I am getting this error:

recursive loop detected in template string

Is it possible to temporarily (only for the include) override a variable like this? Because I don't want to create additional variables.

by warvariuc at March 27, 2015 02:36 PM

NullPointerException with json feeder in tests scope with gatling and scalatest

I am trying to develop some tests with the ScalaTest framework for my Gatling project, but I'm not able to load the jsonFile I'm using inside this project.

I always get a NullPointerException.

I copied the resources directory I'm using from the main folder to the test one, but it's never recognized. Do I have to specify a data folder specifically for this scope?

Here's my build configuration inside my pom.xml:

<build>
    <plugins>
        <plugin>
            <groupId>io.gatling</groupId>
            <artifactId>gatling-maven-plugin</artifactId>
            <version>${gatling-plugin.version}</version>
            <executions>
                <execution>
                    <phase>install</phase>
                    <goals>
                        <goal>execute</goal>
                    </goals>
                    <configuration>
                        <!-- Default values -->
                        <!--<configFolder>src/test/resources</configFolder-->
                        <dataFolder>src/main/resources/data</dataFolder>
                        <resultsFolder>target/gatling/results</resultsFolder>
                        <!--&lt;!&ndash;<requestBodiesFolder>src/test/resources/request-bodies</requestBodiesFolder>-->
                        <simulationsFolder>src/main/scala</simulationsFolder>
                        <simulationClass>com.awesomecompany.scenarios.Scenarios</simulationClass>
                    </configuration>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <groupId>org.scalastyle</groupId>
            <artifactId>scalastyle-maven-plugin</artifactId>
            <version>0.6.0</version>
            <configuration>
                <verbose>true</verbose>
                <failOnViolation>true</failOnViolation>
                <includeTestSourceDirectory>true</includeTestSourceDirectory>
                <failOnWarning>false</failOnWarning>
                <sourceDirectory>${basedir}/src/main/scala</sourceDirectory>
                <testSourceDirectory>${basedir}/src/test/scala</testSourceDirectory>
                <!--<configLocation>${basedir}/lib/scalastyle_config.xml</configLocation>-->
                <!--<outputFile>${project.basedir}/scalastyle-output.xml</outputFile>-->
                <outputEncoding>UTF-8</outputEncoding>
            </configuration>
            <executions>
                <execution>
                    <goals>
                        <goal>check</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <version>2.18.1</version>
            <dependencies>
                <dependency>
                    <groupId>org.apache.maven.surefire</groupId>
                    <artifactId>surefire-junit47</artifactId>
                    <version>2.18.1</version>
                </dependency>
            </dependencies>
        </plugin>
    </plugins>
</build>

Here's the stack trace:

java.lang.NullPointerException
    at io.gatling.core.config.GatlingFiles$.dataDirectory(GatlingFiles.scala:38)
    at io.gatling.core.config.Resource$.feeder(Resource.scala:64)
    at io.gatling.core.feeder.FeederSupport$class.jsonFile(FeederSupport.scala:40)
    at io.gatling.core.Predef$.jsonFile(Predef.scala:32)
    at com.mycompany.tools.ScenarioParserTest$$anonfun$1.apply$mcV$sp(ScenarioParserTest.scala:19)
    at com.mycompany.tools.ScenarioParserTest$$anonfun$1.apply(ScenarioParserTest.scala:18)
    at com.mycompany.tools.ScenarioParserTest$$anonfun$1.apply(ScenarioParserTest.scala:18)
    at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
    at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
    at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
    at org.scalatest.Transformer.apply(Transformer.scala:22)
    at org.scalatest.Transformer.apply(Transformer.scala:20)
    at org.scalatest.FlatSpecLike$$anon$1.apply(FlatSpecLike.scala:1647)
    at org.scalatest.Suite$class.withFixture(Suite.scala:1122)
    at org.scalatest.FlatSpec.withFixture(FlatSpec.scala:1683)
    at org.scalatest.FlatSpecLike$class.invokeWithFixture$1(FlatSpecLike.scala:1644)
    at org.scalatest.FlatSpecLike$$anonfun$runTest$1.apply(FlatSpecLike.scala:1656)
    at org.scalatest.FlatSpecLike$$anonfun$runTest$1.apply(FlatSpecLike.scala:1656)
    at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
    at org.scalatest.FlatSpecLike$class.runTest(FlatSpecLike.scala:1656)
    at org.scalatest.FlatSpec.runTest(FlatSpec.scala:1683)
    at org.scalatest.FlatSpecLike$$anonfun$runTests$1.apply(FlatSpecLike.scala:1714)
    at org.scalatest.FlatSpecLike$$anonfun$runTests$1.apply(FlatSpecLike.scala:1714)
    at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
    at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
    at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:390)
    at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:427)
    at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
    at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
    at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
    at org.scalatest.FlatSpecLike$class.runTests(FlatSpecLike.scala:1714)
    at org.scalatest.FlatSpec.runTests(FlatSpec.scala:1683)
    at org.scalatest.Suite$class.run(Suite.scala:1424)
    at org.scalatest.FlatSpec.org$scalatest$FlatSpecLike$$super$run(FlatSpec.scala:1683)
    at org.scalatest.FlatSpecLike$$anonfun$run$1.apply(FlatSpecLike.scala:1760)
    at org.scalatest.FlatSpecLike$$anonfun$run$1.apply(FlatSpecLike.scala:1760)
    at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
    at org.scalatest.FlatSpecLike$class.run(FlatSpecLike.scala:1760)
    at org.scalatest.FlatSpec.run(FlatSpec.scala:1683)
    at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:55)
    at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2563)
    at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2557)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:2557)
    at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1044)
    at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1043)
    at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:2722)
    at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1043)
    at org.scalatest.tools.Runner$.run(Runner.scala:883)
    at org.scalatest.tools.Runner.run(Runner.scala)
    at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runScalaTest2(ScalaTestRunner.java:138)
    at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.main(ScalaTestRunner.java:28)

by Juan Wolf at March 27, 2015 02:34 PM

/r/netsec

StackOverflow

Slick error while compiling table definitions: could not find implicit value for parameter tm

I am completely new to Slick. I am trying to create a basic table type, but it just doesn't compile. Here's my code:

import scala.slick.driver.PostgresDriver._
import scala.slick.lifted.Tag
import scala.slick.lifted.Column
import scala.slick.lifted.ProvenShape

class Documents(tag: Tag) extends Table[(Long, String, String)](tag, "DOCUMENTS") {
       def id: Column[Long] = column[Long]("ID", O.PrimaryKey)
       def `type`: Column[String] = column[String]("TYPE")
       def data: Column[String] = column[String]("DATA")

       def * : ProvenShape[(Long, String, String)] = (id, `type`, data)
}

And I get these errors:

<console>:13: error: could not find implicit value for parameter tm: scala.slick.ast.TypedType[Long]
              def id: Column[Long] = column[Long]("ID", O.PrimaryKey)
                                           ^
<console>:14: error: could not find implicit value for parameter tm: scala.slick.ast.TypedType[String]
              def `type`: Column[String] = column[String]("TYPE")
                                                   ^
<console>:15: error: could not find implicit value for parameter tm: scala.slick.ast.TypedType[String]
              def data: Column[String] = column[String]("DATA")
                                                 ^
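
For reference, a minimal sketch of the usual fix (for Slick 2.x, which these imports suggest): the implicit TypedType instances live in the driver's simple API, so importing it wholesale, instead of the individual lifted classes, brings them into scope:

import scala.slick.driver.PostgresDriver.simple._

class Documents(tag: Tag) extends Table[(Long, String, String)](tag, "DOCUMENTS") {
  def id = column[Long]("ID", O.PrimaryKey)
  def `type` = column[String]("TYPE")
  def data = column[String]("DATA")
  def * = (id, `type`, data)
}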

by Szakállas Dávid at March 27, 2015 02:28 PM

Akka not shutting down cleanly in Servlet

I'm developing a Servlet-based web application in Scala, and using Akka. Everything works fine while it's up and running: I see no errors when reviewing the code, and my child actors all bring themselves up and shut themselves down correctly. However, when I try to shut down my server or re-deploy, I get a lot of errors in the console:

Shutting down...
Shut down successfully.
27-Mar-2015 11:21:22.233 SEVERE [localhost-startStop-2] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [ROOT] created a ThreadLocal with key of type [java.lang.ThreadLocal] (value [java.lang.ThreadLocal@68d6d1aa]) and a value of type [scala.concurrent.forkjoin.ForkJoinPool.Submitter] (value [scala.concurrent.forkjoin.ForkJoinPool$Submitter@1045f98]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
27-Mar-2015 11:21:22.233 SEVERE [localhost-startStop-2] org.apache.catalina.loader.WebappClassLoaderBase.checkThreadLocalMapForLeaks The web application [ROOT] created a ThreadLocal with key of type [scala.util.DynamicVariable$$anon$1] (value [scala.util.DynamicVariable$$anon$1@5f818593]) and a value of type [org.apache.tomcat.util.log.SystemLogHandler] (value [org.apache.tomcat.util.log.SystemLogHandler@5b616a23]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
27-Mar-2015 11:21:22.241 INFO [main] org.apache.coyote.AbstractProtocol.stop Stopping ProtocolHandler ["http-nio-8080"]
27-Mar-2015 11:21:22.243 INFO [main] org.apache.coyote.AbstractProtocol.stop Stopping ProtocolHandler ["ajp-nio-8009"]
27-Mar-2015 11:21:22.244 INFO [main] org.apache.coyote.AbstractProtocol.destroy Destroying ProtocolHandler ["http-nio-8080"]
27-Mar-2015 11:21:22.244 INFO [main] org.apache.coyote.AbstractProtocol.destroy Destroying ProtocolHandler ["ajp-nio-8009"]
Disconnected from server

If I use a regular, synchronous servlet, I don't get the messages in the log upon shutdown.

I have a context listener on the servlet which is the following:

class ContextListener extends ServletContextListener {
    private var system: ActorSystem = _

    override def contextInitialized(sce: ServletContextEvent): Unit = {
        val context = sce.getServletContext

        system = ActorSystem.create("StridentStandard")

        context.setAttribute("actor.system", system)
    }

    override def contextDestroyed(sce: ServletContextEvent): Unit = {
        println("Shutting down...")
        system.shutdown()
        system.awaitTermination()
        println("Shut down successfully.")
    }
}

So, from what it looks like, the ActorSystem should be shutting down correctly, but it seems some threads are hanging on.

I'm fairly new to Scala, and Akka... and concurrency, and as a result am not really sure where to go from here.

by Seer at March 27, 2015 02:06 PM

/r/netsec

StackOverflow

How to check whether a file has changed in a shell command?

I have a file generated by a shell command:

- stat: path=/etc/swift/account.ring.gz get_md5=yes
  register: account_builder_stat

- name: write account.ring.gz file
  shell: swift-ring-builder account.builder write_ring <--- rewrite account.ring.gz
    chdir=/etc/swift
  changed_when: ??? account_builder_stat.changed ??? <-- doesn't give the desired effect

How can I check that the file has been changed?

by Hett at March 27, 2015 01:40 PM

QuantOverflow

Lease Accounting / FX Embedded Derivatives

I have a lease agreement where the functional currency is USD and the domestic currency is UAH. The lease agreement is written in EUR (rent rate) and payments are to be made in UAH in the amount of rent rate (EUR) * UAH/EUR exchange rate. Should I account for it as an embedded derivative and value it separately?

The same question applies to the following situation: I have a lease agreement where the functional currency is USD and the domestic currency is UAH. The lease agreement is written in USD (rent rate) and payments are to be made in UAH in the amount of rent rate (USD) * UAH/USD exchange rate. Should I account for it as an embedded derivative and value it separately?

Thank you in advance

by Dasha Sladkova at March 27, 2015 01:38 PM

How to fit a SARIMA + GARCH in R?

I'd like to fit a non-stationary time series using a SARIMA + GARCH model. I have not found any package that allows me to fit this model. I'm using rugarch:

model = ugarchspec(
  variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model = list(armaOrder = c(2, 2), include.mean = T),
  distribution.model = "sstd")
modelfit = ugarchfit(spec = model, data = y)

but it only allows me to fit an ARMA + GARCH model. Can you help me? Thank you!

by Manuel at March 27, 2015 01:28 PM

StackOverflow

Clojure Long Literal String

What I want

Some programming languages have a feature for creating multi-line literal strings, for example:

some stuff ... <<EOF
  this is all part of the string
  as is this
  \ is a literal slash
  \n is a literal \ followed by a literal n
  the string ends on the next line
EOF

Question: Does Clojure have something similar to this? I realize that " handles multi-line fine, but I want it to also properly handle \ as a literal.

Thanks!

by user1383359 at March 27, 2015 01:20 PM

CompsciOverflow

Optimization in multivalued logic. Optimal strings with given patterns

This question comes from an application in multivalued logic.

Suppose, we are given an alphabet of three letters $A, B, C$ and a set of indices $1,2,3,4,5$. Consider items formed by subscripting the letters with one of the indices. Examples are the following:

$$A_1 \\ B_4 \\ C_3 \\ ... $$

Suppose, additionally, that there is a total binary relation $\mathcal{R}$ on items, $*_i \le *_j$ for each pair $i, j \in \{ 1,..., 5 \}$, where $*$ denotes one of the letters $A, B$ and $C$. The relation is reflexive and transitive.

Consider a set of strings of length $5$ freely generated by the alphabet $A, B, C, ...$ disregarding ordering. Examples are the following:

$$ AABBC \\ AAAAB \\ AABCC \\ ... $$

Since order doesn't count, a string $AABBC$ is the same as $ABBAC$.

Let's call such strings templates. The templates can be "filled in" with the indices to form tuples of items using the following rules:

  • R1: there may be no repetitions of indices disregarding the letter type, e.g.

$$A_1, A_1, B_2, C_3, C_4$$

and

$$A_1, A_2, B_2, C_1, C_4$$

are both forbidden since the index $1$ is used two times.

  • R2: ordering in a tuple does not count, e. g.

$$\left( A_3 A_2 B_1 B_4 C_5 \right)$$

is the same as

$$\left( A_2 A_3 B_1 B_4 C_5 \right)$$

One can think of such "filling in" the templates as filling in a truth table with several logical levels in general.

Let us denote the set of all freely generated strings of length $5$ from the $A,B$ and $C$ by $\mathbb{Tmp}$. Let $\mathbb{Tpl}$ denote the set of all tuples generated by index assignments according to the rules R1 and R2. We can define a mapping $U:\mathbb{Tpl} \mapsto \mathbb{Tmp}$ which removes the indices from a tuple and gives its underlying template. For example:

$$ U \left( A_2 A_3 B_1 B_4 C_5 \right) = AABBC $$

We say that a tuple $T$ satisfies the template $t$ if $U(T) = t$. We say that $T$ satisfies a set of templates $\tau = \left(t_1, t_2, ..., t_l \right)$ if there exists a template $t_i \in \tau$ for some $i \in \left( 1,2,...,l \right)$ s. t. $U(T) = t_i$.

Suppose, that we are given a set of templates $\tau$ and a total binary relation $\mathcal{R}$ on items.

What is the greatest item (greatest in the sense of the binary relation $\mathcal{R}$) of all the least items of all the tuples satisfying the templates $\tau$? In other words, what is the maximin item of the tuples?

The bruteforce algorithm is evident: take a template, build all the corresponding tuples, compare elements in each tuple and find the least, compare all the outcomes of the previous step, take the greatest. I am sure this is an $NP$-complete problem if the underlying truth table is irreducible.

What if we order all the items to obtain a sequence like this:

$$ C_2 \le B_4 \le C_1 \le C_5 \le B_1 \le A_2 \le C_3 \le A_3 \le B_3 \le C_4 \le A_4 \le A_5 \le B_5 \le A_1 \le B_2 $$

Let's call this sequence $S$.

Can it simplify the problem by any chance?

For instance, consider the templates:

$$\tau = \left( AAAAB, AAABB, AAABC \right)$$

Can there be an algorithm more effective than the bruteforce? I was thinking in the following way: one can go through the sequence $S$ from left to right and "drop" items until there are no more letters to drop (otherwise, the templates are violated) or the indexing requirement is violated.

Example of such a procedure goes like this:

drop $C_2$, drop $B_4$, drop $C_1$, drop $C_5$, drop $B_1$, drop $A_2$, drop $C_3$, drop $A_3$, $B_3$ must be picked

So no tuple may contain an item greater than $B_3$.

It seems to offer a minor complexity reduction and only for simple templates. For more complex ones, there are subtleties which I won't discuss here. But I am suspicious that there is a fundamental limitation in this problem which makes every algorithm not really better than the bruteforce.

by Valery Saharov at March 27, 2015 01:15 PM

QuantOverflow

API-based equity screeners?

I know there are APIs from different brokers that allow you to trade and also obtain information about specific companies, but I wonder if there are equity/asset screeners that are API-based and can be triggered in real time. For example, I'd love to have an API that would alert me of any equities that are:

  • Near 52 week low
  • Have P/E < 30
  • 5 year average earnings growth is > 5%
  • etc.

I could do it myself with brokerage screeners, but they are totally manual; I'd have to build and run them. If there are APIs that can be tied to automated scripts, that'd be amazing, as it would mean both speed and coverage in terms of trading opportunities.

by Uzumaki Naruto at March 27, 2015 01:11 PM

CompsciOverflow

Emulations of atomic registers and read-modify-write (RMW) primitives in message-passing systems

The ABD algorithm in paper: Sharing Memory Robustly in Message-Passing Systems can emulate single-writer multi-reader atomic registers in message-passing systems, in the presence of processor or link failures.

Looking into the details of this algorithm (in Figure 2, page 133), I found that it implicitly assumes the "conditional write" primitives at the side of servers:

case received from w
    <W, label_w>: label_j = max{label_w, label_j}; 
                  send <ACK-W> to w;

Here, the statement label_j = max{label_w, label_j} is equivalent to if (label_j < label_w) then label_j = label_w, requiring the variable label_j maintained by each process $j$ to be monotonic. This in turn needs the if-then "conditional write" primitive.

My first question is:

(1) Is this "conditional write" primitive necessary? Do you know any literature on emulations of atomic, SWMR registers without such primitives?


In the last section of the same paper, titled "Discussion and Further Research", the authors mentioned the emulations of stronger shared memory primitives in message-passing systems in presence of failures. One typical example is read-modify-write (RMW).

My second question is:

(2) Do you know any literature on the emulations of RMW-like primitives?

Google search and a quick glance over the "cited by" papers do not bring me anything specific.

by hengxin at March 27, 2015 12:57 PM

StackOverflow

how to store a encrypt password in mysql in scala playframework 2.2?

I am new to Scala. I have to store user passwords in a database, so I want them stored in encrypted form. Can anyone show me how to do the encryption in Scala 2.10 with Play Framework 2.2? Is there a way to use the encryption directly in a model function, just before the insert query for the password?
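
A minimal sketch of one common approach, assuming the jBCrypt library is on the classpath (e.g. "org.mindrot" % "jbcrypt" % "0.3m" in the build); plainPassword and candidatePassword are hypothetical names:

import org.mindrot.jbcrypt.BCrypt

// in the model, just before the insert query: hash and store only the hash
val hash: String = BCrypt.hashpw(plainPassword, BCrypt.gensalt())

// at login time: compare the submitted password against the stored hash
val ok: Boolean = BCrypt.checkpw(candidatePassword, hash)

Note that one-way hashing is usually preferred over reversible encryption for passwords, since the application never needs the plain text back.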

by user2822316 at March 27, 2015 12:55 PM

CompsciOverflow

mingw does not let me use complex.h [on hold]

I am trying to use complex.h in mingw, but it is not working. I get the error message:

$ gcc complex.c -std=c99
In file included from complex.c:2:
/usr/include/complex.h:25:20: _mingw.h: No such file or directory
In file included from complex.c:2:

What would be causing this? Thanks in advance

by Math525 at March 27, 2015 12:48 PM

StackOverflow

Does the organization of dataset size matter when using join in Apache Spark?

I have two RDDs I want to join. One is very large, XL, and the other is regular-sized, M. For speed, does it matter in which order I join them? For example:

val data = M.join(XL)

vs

val data = XL.join(M)

by Climbs_lika_Spyder at March 27, 2015 12:45 PM

Planet Emacsen

Irreal: How to Install an Info File

I just wrote about implementing abo-abo's solution for having multiple Info buffers and in particular his using the gcl Info file as a substitute for the Common Lisp HyperSpec. I really wanted to try that out so I downloaded the tar file from the link that he provided and installed it in my .emacs.d directory. Then, as abo-abo explained, I added ~/.emacs.d/info to the Info-additional-directory-list variable.

Sadly, it didn't work. I asked abo-abo for his wisdom on the matter and he suggested that I just push the .emacs.d/info directory directly onto the Info-directory-list. I tried that and it didn't work either. I could get to it with the standalone Info reader by using the -f option but couldn't get to it from Emacs.

The problem was that the gcl file was not being added to the dir file that serves as the top node for Info. The proper way to do that is to run install-info. In my case

sudo install-info gcl.info /usr/share/info/dir

Annoyingly, it still didn't work. It turns out that install-info extracts data from the Info file you're installing to update the dir file. For some reason the gcl.info file didn't have the required information. I adapted the information from the sbcl.info file, added it to gcl.info, reran install-info, and everything worked fine. Unless you're trying to use the gcl.info that abo-abo linked to, you don't have to worry about that; just run install-info.

If you're trying to install the gcl file, here is what you need to add to the top of gcl.info

INFO-DIR-SECTION Software development
START-INFO-DIR-ENTRY
* gcl: (gcl).           The GNU Common Lisp compiler
END-INFO-DIR-ENTRY

by jcs at March 27, 2015 12:44 PM

StackOverflow

Scala PackratParser ignores failure parser

I have a parser that was written using Scala's RegexParsers - https://github.com/adamretter/csv-validator/blob/master/csv-validator-core/src/main/scala/uk/gov/nationalarchives/csv/validator/schema/SchemaParser.scala

It had some serious performance problems when parsing a grammar which had deeply nested expressions. As such I have created a version where I mix in Scala's PackratParsers - https://github.com/adamretter/csv-validator/blob/packrat-parsers/csv-validator-core/src/main/scala/uk/gov/nationalarchives/csv/validator/schema/SchemaParser.scala

The Packrat version does not exhibit the same performance issue and correctly parses the grammar. However, when I provide an invalid grammar for testing, e.g. https://github.com/adamretter/csv-validator/blob/packrat-parsers/csv-validator-core/src/test/scala/uk/gov/nationalarchives/csv/validator/schema/SchemaParserRulesSpec.scala#L138

The old (non-packrat) parser used to correctly report the 'Invalid rule' failure, via the failure parser combinator | failure("Invalid rule") here - https://github.com/adamretter/csv-validator/blob/master/csv-validator-core/src/main/scala/uk/gov/nationalarchives/csv/validator/schema/SchemaParser.scala#L511

When using the packrat-parser version, if I enable tracing I can see from the trace that the failure is created just as it is in the non-packrat version; however, the PackratParser seems to ignore it and always returns failure: Base Failure instead.

Is there something different about failure handling when using PackratParsers which I need to understand?

by adamretter at March 27, 2015 12:40 PM

CompsciOverflow

Maximum Enclosing Circle of a Given Radius

I am trying to find an approach to the following problem:

Given the set of point $S$ and radius $r$, find the center point of circle, such that the circle contains the maximum number of points from the set. The running time should be $O(n^2)$.

At first it seemed similar to the smallest enclosing circle problem, which can easily be solved in $O(n^2)$. The idea was to pick an arbitrary center and a circle enclosing all points of $S$, then, step by step, move the circle to touch the left/rightmost points and shrink it to the given radius. Obviously, this is not going to work.

by com at March 27, 2015 12:36 PM

Planet Clojure

Student applications due today

The student application deadline is coming up at 19:00 UTC, less than seven hours from now.  What does this mean for you?
If you are a student…
 
You must have your application submitted to Melange by 19:00 UTC.  This is a hard deadline, and we have no control over that.  On a case by case basis, during review, we may allow you to revise your application, but please be sure that the application is ready when you submit it.  Please adhere to the Student Application Deadlines.  Also, be sure that the mentor you have been working with is signed up as a mentor in Melange.
If you are considering being a mentor…
 
Please be sure to sign into Melange and request to connect with Clojure as a mentor.  Be sure to write something about who you are and what projects you are interested in mentoring when you apply.
What happens next?
 
Once the student application deadline closes, all of the Clojure mentors and admins will review all of the student proposals.  During this period, we will assess the number of good mentor/student combinations available. By the 13th of April, we will have let Google know how many students we would like to have.  Over the next couple of weeks, Google will allocate student slots to all of the organisations and we will deduplicate in any cases where the same student is accepted by more than one organisation.  Finally, on the 27th of April, Google will announce the students who have been selected.  Until then, we cannot comment on who is or is not accepted.
Thanks again to everyone who is volunteering to make this effort a success.  Clojure/GSoC could not succeed without all of your efforts.  I am looking forward to a wonderful summer of code.

by clojuregsoc at March 27, 2015 12:29 PM

Lobsters

/r/netsec

StackOverflow

Howto create a .jar including both sources (.java and .scala) and classes with sbt?

I would like sbt package, or any variant, to produce a '.jar' from my project that would also include the sources ('.java' and '.scala' files).

This would be a mix of packageBin and packageSrc.

I did not:

  • find any task that would do this
  • find out how to adapt the package task
  • nor define a new task of my own to achieve this

Thanks for any hint.
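
One possible approach, sketched for sbt 0.13 and untested against your project: append the source mappings to the binary package's mappings in build.sbt, so that sbt package emits a jar containing both the classes and the .java/.scala files:

// build.sbt: make packageBin also carry everything packageSrc would package
mappings in (Compile, packageBin) ++= (mappings in (Compile, packageSrc)).value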

by user3407924 at March 27, 2015 12:13 PM

QuantOverflow

Proving there exists no arbitrage opportunities given 3 states and 2 assets

Assume there are 3 states of the world: w1, w2, and w3. Assume there are two assets: a risk-free asset returning Rf in each state, and a risky asset with return R1 in state w1, R2 in state w2, and R3 in state w3. Assume the probabilities are 1/4 for state w1, 1/2 for state w2, and 1/4 for state w3. Assume Rf = 1.0 and R1 = 1.1, R2 = 1.0 and R3 = 0.9.

(a) Prove that there are no arbitrage opportunities. (b) Describe the one-dimensional family of state price vectors (q1, q2, q3).

For (a), I believe this is equivalent to showing there exists a state price vector.

I know p = Xq, but since we are only given two assets, X doesn't have an inverse, so I don't know how to compute q. Further, we are not given p. How do I show a state price vector exists?
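
For what it's worth, a sketch of how the calculation can go, normalizing both asset prices to 1. A state price vector q with strictly positive entries must price both assets:

1 = Rf (q1 + q2 + q3) = q1 + q2 + q3

1 = 1.1 q1 + 1.0 q2 + 0.9 q3

Subtracting the first equation from the second gives 0.1 q1 = 0.1 q3, i.e. q1 = q3, so the solutions form the family (q1, q2, q3) = (t, 1 - 2t, t) with 0 < t < 1/2 for strict positivity. Since strictly positive state price vectors exist, there is no arbitrage, and the same family answers (b).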

by user2034 at March 27, 2015 12:11 PM

StackOverflow

install leiningen 2 on ubuntu

I have followed the instructions here:

leiningen.org

to install from the lein script. I now have a ~/.lein/self-installs/leiningen-2.4.3-standalone.jar

How do I now run leiningen? The instructions are not clear. Thanks

by user3231690 at March 27, 2015 12:07 PM

QuantOverflow

Effect of massive volatility on BS formula

I am experimenting with very high volatility in the standard Black-Scholes formula. I set the risk-free rate to zero, time to expiry to 1, volatility to 1 (= 100%), and the underlying to 1. Then I simulate the profit and loss on (i) the delta hedge account and (ii) the option account between t = 1 and t = 0, and compare the expected return on both. There is a persistent bias in the result: the expected return on the option account is consistently higher.

I'm wondering if this is caused by the difference between the volatility of the log returns and that of the actual returns. There is little difference at low volatility, but it is huge at such massive vols. It could be a calculation error, of course, but I don't think so, because everything works fine at typical volatilities.

by quis est ille at March 27, 2015 12:07 PM

StackOverflow

Error running scala console. Module not found. inteillij IDEA 14

I am able to run sample code, which I have saved in sample.sc, and the results are displayed in the Scala console.

But for the following program, which I saved as a Timeprogram.scala script, I get the error:

Error running scala console. Module is not specified.

Please help.

The program is

/**
 * Created by sarathrnair on 3/18/15.
 */

println ( "Enter the seconds" )

val totalSeconds=readInt()
val displaySeconds=totalSeconds%60
val totalMinutes=totalSeconds/60
val displayMinutes=totalMinutes%60
val displayHours=totalMinutes/60
val sec=displaySeconds.toString
val min=displayMinutes.toString
val finalString=displayHours+":"+("0"*(2-min.length))+min+":"+("0"*(2-sec.length))+sec

println (finalString)

by user3116355 at March 27, 2015 12:01 PM

DataTau

StackOverflow

Alternative to System.gc() to force PrintWriter to close

I'm writing 300,000 files using PrintWriter (from Scala). I instantiate a PrintWriter, println() to it, then close it. After a few thousand iterations, I get "java.io.IOException: Too many open files". My workaround has been to do System.gc() every 500 files, but each GC is really slow (10-20 seconds). Is there a more performant way to force Java to close files?
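
A minimal sketch of the usual fix (writeFile is a hypothetical helper): close each writer deterministically in a finally block, so file handles are released immediately instead of waiting for the garbage collector to finalize abandoned writers:

import java.io.PrintWriter

def writeFile(path: String, lines: Seq[String]): Unit = {
  val out = new PrintWriter(path)
  try lines.foreach(out.println)
  finally out.close() // runs even if a println throws, so the handle is freed right away
}

If the writer is already being closed on the happy path, needing System.gc() suggests some code path skips the close; try/finally removes that possibility.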

by Michael Malak at March 27, 2015 11:58 AM

Inverse of Supplier<T> in Guava

I'm looking for the inverse of Supplier<T> in Guava. I hoped it would be called Consumer (nope) or Sink (exists, but is for primitive values).

Is it hidden somewhere and I'm missing it?

I'd like to see it for the same kinds of reasons that Supplier is useful. Admittedly, uses are less common, but many of the static methods of Suppliers, for example, would apply in an analogous way, and it would be useful to express in one line things like "send this supplier every value in this iterable".

In the meantime, Predicate and Function<T,Void> are ugly workarounds.

by BeeOnRope at March 27, 2015 11:49 AM

Powerset Without Duplicates

I need to create a powerset function in Haskell which takes a set and outputs the power set without duplicate entries, regardless of what is put in the input list. For example: [1,1] should return [[],[1]].

    powerset [] = [[]]
    powerset (x:xs) = union((powerset xs)) (map (x:) (powerset xs))

Here union is a previously defined function that adjoins two sets without duplicates. The problem with the above code is that it counts duplicates as original entries, so the input [1,1] returns [[],[1],[1],[1,1]].

Any ideas? I've thought of using union with the input list and the empty list to scrub out duplicates, prior to triggering powerset, but I'm not sure how that would look.

by lepdeffard at March 27, 2015 11:46 AM

json4s jackson - How to ignore field using annotations

I'm using json4s-jackson (version 3.2.11).

I'm trying to ignore a field using annotations (like the Jackson Java version).

Here's an example:

case class User(id: Long, name: String, accessToken: String)

The following code is not working:

@JsonIgnoreProperties(Array("accessToken"))
case class User(id: Long, name: String, @JsonProperty("accessToken") accessToken: String)

by Jae-Ung Lim at March 27, 2015 11:40 AM

Lambda the Ultimate Forum

Going Against the Flow for Type-less Programming (take 3!)

This is a paper submission I'm working on for a conference deadline next week. This time is different, I've trimmed down my goals and have developed a system that really works! Anyways, abstract:

Object-oriented languages have been plagued by poor support for type inference given difficulty in combining subtype and parametric polymorphism. This paper introduces a type system called Type-less that provides useful type feedback about OO code with few type annotations. Type-less rethinks subtyping as a multi-channel relation that simplifies variance to more precisely capturing constituent type parameter data flows, and by leveraging "backward" type inference that specializes term types based on their usage rather than generalizing types to validate their usage. Type annotations are then unnecessary in client code that does not define new abstractions, and greatly reduced otherwise. The result is a fluid programming experience whose feel approaches that of a dynamic language, which is demoed as part of this paper.

Short videos I've recorded of the system in use (I'll probably redo these next week, but they are still viewable):

Type theorists might hate my approach.

March 27, 2015 11:39 AM

StackOverflow

Replacing bad performing workers in pool

I have a set of actors that are somewhat stateless and perform similar tasks. Each of these workers is unreliable and potentially low-performing. In my design, I can easily spawn more actors to replace lazy ones.

Currently, each actor assesses its own performance. Is there a way to make the supervisor/actor pool do this assessment, to help decide which workers are slow enough to replace? Or is my current strategy "the" right strategy?

by Stas Kurilin at March 27, 2015 11:27 AM

How to force Logger.debug output in Play! framework specs2 tests?

By default all Logger output, visible when an application is running, is muted when the application is tested.

How to force the debugs, infos etc. to be shown in the specs2 reports?

by Rajish at March 27, 2015 11:16 AM

Ansible SSH Forwarding with ProxyCommand doesn't work

I am working with SSH forwarding to remote Linux servers. I can reach the springboard server (step), but I can't reach the other servers that accept SSH connections from the springboard server. Here's the output:

$ ansible -m ping -i hosts/production remote -vvvv
<step>
<other>
<step>
<other>
<other> PasswordAuthentication=no PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey KbdInteractiveAuthentication=no ConnectTimeout=10 ForwardAgent=yes
<step> PasswordAuthentication=no PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey KbdInteractiveAuthentication=no ConnectTimeout=10 ForwardAgent=yes
other | FAILED => SSH encountered an unknown error. The output was:
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
Killed by signal 1.

<step>
<step> ConnectTimeout=10 PasswordAuthentication=no 'LANG=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 /usr/bin/python /home/myname/.ansible/tmp/ansible-tmp-1427428707.4-105400126600575/ping; rm -rf /home/myname/.ansible/tmp/ansible-tmp-1427428707.4-105400126600575/ >/dev/null 2>&1' KbdInteractiveAuthentication=no ForwardAgent=yes PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey
step | success >> {
    "changed": false,
    "ping": "pong"
}

Here's my ansible.cfg:

[ssh_connection]
 ssh_args = -o ForwardAgent=yes -F ssh_config -q
 scp_if_ssh = True

ssh_config:

 Host step
  HostName step_host
  User myname
  Port 22
  ProxyCommand none
  IdentityFile ~/.ssh/my-key

Host other
  HostName other
  User myname
  Port 22
  ProxyCommand ssh -W %h:%p step

I confirmed that SSH forwarding works with Vagrant, but SSH forwarding to the remote server doesn't. Why does this not work? Thanks in advance.

by Taiki Matsuda at March 27, 2015 11:16 AM

CompsciOverflow

Going deeper with pseudo-polynomial time algorithm for set partitioning

If I have a set of positive integers, and I'm sure that the pseudo-polynomial time algorithm for the partition problem will not give me an answer, what should I do next?

To illustrate this problem let's take a look at this example: {100,1,2,3}.

The p-p algorithm will give the answer False, and then I can finish with the result: this set can be partitioned into two sets with sums 6 and 106 - 6 = 100.

(The 6 is the last result from the p-p algorithm with [_][vector.size()] = True; the 106 is the sum of all elements.)

But what if I really want to know the maximum-sum two-way split of this set in which both subsets have the same sum? For example, the result I'm looking for here should be 3: the set {100,1,2,3} can be split (by leaving out the 100) into two subsets with the same sum, {1,2} and {3}.

How can I achieve this result?
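
One direction that may help (my own sketch, not from the question): run a DP whose state is the difference between the two parts and whose value is the largest achievable sum of the smaller part; the answer is the value stored at difference zero. A Scala sketch:

def maxEqualSplit(nums: Seq[Int]): Int = {
  // dp(d) = largest sum of the smaller part over all pairs of disjoint
  // subsets chosen so far whose sums differ by exactly d
  var dp = Map(0 -> 0)
  for (x <- nums) {
    var next = dp // skipping x entirely is also allowed
    for ((d, small) <- dp) {
      // put x on the larger part: the difference grows by x
      next = next.updated(d + x, next.getOrElse(d + x, 0) max small)
      // put x on the smaller part: it may overtake the larger one
      val nd = (d - x).abs
      next = next.updated(nd, next.getOrElse(nd, 0) max (small + (d min x)))
    }
    dp = next
  }
  dp(0) // equal sums: difference zero
}

maxEqualSplit(Seq(100, 1, 2, 3)) // 3, i.e. {1,2} vs {3}

The number of reachable differences is bounded by the total sum, so this stays pseudo-polynomial, like the partition algorithm itself.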

by kowal66b at March 27, 2015 11:08 AM

StackOverflow

enable macro paradise to expand macro annotations

I wanted to check some examples with annotations in macro paradise, and I am getting the error specified in this example.

I have linked the projects; the other Scala macros (without annotations) work very well. I have included the library paradise_2.11.6-2.1.0-M5 (in both projects, too :( ). I think I do not get what is meant by *to enable*. By the way, I am using Scala IDE in Eclipse.
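
In case it helps: "to enable" means registering paradise as a compiler plugin, not just putting the jar on the library classpath. In sbt terms the standard line is the following (a sketch; match the version to the jar you are using):

addCompilerPlugin("org.scalamacros" % "paradise" % "2.1.0-M5" cross CrossVersion.full)

In Scala IDE the equivalent should be passing the paradise jar through the Xplugin setting in the Scala Compiler preferences, if I recall the preferences layout correctly.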

by user4712458 at March 27, 2015 10:49 AM

Using patterns to populate host properties in Ansible inventory file

I have a host file that looks like

[foo]
foox 192.168.0.1 id=1
fooy 192.168.0.1 id=2
fooz 192.168.0.1 id=3

However, I'd like to more concisely write this using patterns like:

[foo]
foo[x:z] 192.168.0.1 id=[1:3]

But this is getting interpreted as id equaling the raw text of "[1:3]", rather than 1, 2, or 3. Is there a way to achieve this in the inventory file, or will I need to do something through host vars and/or group vars?

by Shark at March 27, 2015 10:46 AM

QuantOverflow

Can I do a GARCH model to forecast a time series?

I read this paper

https://research.aston.ac.uk/portal/files/240393/AURA_2_unmarked_Energy_demand_and_price_forecasting_using_wavelet_transform_and_adaptive_forecasting_models.pdf

The two authors forecast the one-day-ahead gas price using, among others, a GARCH model. How does this model work? Isn't a GARCH model useful just for forecasting volatility? Thank you!

by Manuel at March 27, 2015 10:45 AM

CompsciOverflow

Term rewrite system is non-confluent, but cannot find different normal forms of term

I wrote a simple TRS that I believe is non-confluent, but I'm not able to find a term with two normal forms for it.

The TRS is defined on the signature $\mathcal F=\{t,\ l,\ s,\ o\}$ and the rewrite rules $\mathcal R$ are defined as follows:

  • $l\to s(o)$
  • $t(x,\ l)\to s(x)$
  • $t(l,\ x)\to s(x)$.

This TRS is clearly finite and terminating.

For the first two rules, we have a critical pair $(t(x,\ s(o)),\ s(x))$ which is not joinable. Therefore, the TRS is not confluent.

It seems that for such a simple example I should be able to find a term with two normal forms. What am I missing?
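
A worked check (my own, worth verifying): instantiate the peak of that critical pair with a ground term, e.g. $t(o,\ l)$. The second rule rewrites it to $s(o)$, while rewriting the inner $l$ with the first rule gives $t(o,\ s(o))$. Neither term can be rewritten further, so $t(o,\ l)$ has two distinct normal forms.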

by amaurremi at March 27, 2015 10:33 AM

/r/compsci

How to create DFA from regular expression without using NFA?

The objective is to create a DFA from a regular expression, and using the "regular expression > NFA > DFA" conversion is not an option (because it takes a lot of time to come up with that).

I asked our professor about this, but he told me that we can use intuition, and kindly refused to provide any explanation, because I think he is going to ask about it in the midterm. He didn't even give me any resources or references to work with.

submitted by n1b2
[link] [3 comments]

March 27, 2015 10:30 AM

StackOverflow

In what scenario does self-type annotation provide behavior not possible with extends

I've tried to come up with a composition scenario in which self-type and extends behave differently, and so far have not found one. The basic example always talks about a self-type not requiring the class/trait to be a subtype of the dependent type, but even in that scenario the behavior between self-type and extends seems to be identical.

trait Fooable { def X: String }
trait Bar1 { self: Fooable =>
  def Y = X + "-bar"
}
trait Bar2 extends Fooable {
  def Y = X + "-bar"
}
trait Foo extends Fooable {
  def X = "foo"
}
val b1 = new Bar1 with Foo
val b2 = new Bar2 with Foo

Is there a scenario where some form of composition or functionality of composed object is different when using one vs. the other?

Update 1: Thanks for the examples of things that are not possible without self-typing. I appreciate the information, but I am really looking for compositions where self and extends are both possible but are not interchangeable.

Update 2: I suppose the particular question I have is why the various Cake Pattern examples generally talk about having to use self-type instead of extends. I've yet to find a Cake Pattern scenario that doesn't work just as well with extends.
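
One concrete difference, offered as a sketch of my own rather than something from the question: self-types allow mutually dependent traits, which extends cannot express because inheritance must be acyclic:

trait A { self: B => def a = "a" }
trait B { self: A => def b = "b" }

val ab = new A with B // fine: each self-type requirement is satisfied

// With inheritance the same cycle is rejected:
// trait A2 extends B2 // error: illegal cyclic reference
// trait B2 extends A2

Relatedly, Bar1 above is not a subtype of Fooable while Bar2 is, so a value statically typed as Bar1 cannot be assigned to a Fooable, even though every concrete instance must mix one in.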

by Arne Claassen at March 27, 2015 10:29 AM

What are the annotees of a scala macro annotation? Or how many times is macro applied

After spending quite some time searching through the Scala documentation, I have not found this particular bit of information. Or at least not phrased in a way I could easily understand or be certain about.

I have this annotation:

class MyAnnotation extends StaticAnnotation {

  def macroTransform(annotees: Any*) = macro myImpl

}

And I have used it on two or more classes like this:

@MyAnnotation
class One {}

@MyAnnotation
class Two {}

I would like to know whether the annotees will contain both classes, or whether the macro will be executed twice (once for each instance of the annotation). Will I have the following?

annotees.map(_.tree).toList == List(oneClassDef /*classdef of One*/, twoClassDef /*classdef of Two*/)
> true

Is it possible to make it so that the annotation triggers only one application of the macro, with all the annotated classes in the annotees at once?

by ɭɘ ɖɵʊɒɼɖ 江戸 at March 27, 2015 10:29 AM

Does the incremental compilation speed in Scala depend on the number of classes per file?

I have written my first medium-sized project in Scala, and I am now a little worried that the slow incremental compilation time inside Eclipse might have something to do with my tendency to put my classes in relatively few, big .scala files.

My logic behind this is as follows: If I modify a small class inside a big .scala file and hit save, the compiler might only see that the entire file was somehow modified, and is therefore forced to recompile everything that's in the file along with the dependent classes, instead of just the modified class and its dependent classes.

So here's the question: Does the average number of Scala classes that you put in a single file in any way affect recompilation speed? Or to put it this way: In terms of recompilation speed, are small .scala files to be preferred over big ones, or is there really no difference?

by python dude at March 27, 2015 10:02 AM

What is the purpose of ioctl set of functions in linux?

In Linux/freeBSD kernel whenever we have to make a driver module for a device, we make a file in the /dev/ folder and use it to communicate with the other processes.

If that is so, what is the purpose of the ioctl set of functions? Whatever information we want to convey to the device driver can be written to or read from this file.

Can anyone please explain it?

I have tried reading about it on tldp.org but couldn't really understand it.

by ps06756 at March 27, 2015 09:48 AM

/r/compsci

Any of you guys have trouble holding relationships because you're tied to a computer all of the time?

I've been having trouble balancing work, school, and life. To my gf it seems like I'm always in front of a computer and I kind of always am. How do you guys balance relationships with work and personal projects?

submitted by realfuzzhead
[link] [42 comments]

March 27, 2015 09:28 AM

StackOverflow

slick insert query with forceInsertQuery

I need to copy a table to another table with the same schema. I would like to do something like insert into table1 select * from table2.

In Slick, it seems possible to insert with queries. There is a function with the signature .insert(:Query).

In my table I defined an "id" column with the auto-increment option. However, Slick automatically omits auto-increment columns, except when using the forceInsert method. In this case the column counts don't match, as I can see if I print the SQL out:

val table = TableQuery[Table_X]
println(TableQuery[Table_Y].insertStatementFor( table.take(1000) ))

The insert statement lacks the "id" column, but table.take(1000) includes it.

How can I solve this problem? I see some functions called forceInsertQuery in the source code of Slick on GitHub. I am not sure whether this can help me or not.
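
For what it's worth, if the forceInsertQuery you spotted in the Slick source is exposed in your version, I would expect the call to look roughly like this unverified sketch:

// the force* variants keep AutoInc columns in the generated INSERT ... SELECT
TableQuery[Table_Y].forceInsertQuery(TableQuery[Table_X].take(1000))

The idea is the same as forceInsert for single rows: bypass the automatic omission of auto-increment columns.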

by worldterminator at March 27, 2015 09:16 AM

/r/emacs

What keybindings do you use for completion?

There are multiple types of completion in Emacs:

Naive completion: dabbrev-expand, hippie-expand, which typically just take strings in the current buffer and use them to complete the thing at point.

Snippet completion: define-skeleton, yasnippet, which use a user-defined prefix to trigger expansion into a defined piece of code.

Smart completion: company-mode, ac-complete, which have smart backends to answer questions like 'what methods does this object have' so they only offer appropriate values.

Which do you use, and what keybindings have you set up? Do you use TAB for indentation, or completion, or some combination? Do you use separate keybindings (e.g. M-/ and C-M-i are the defaults) for different completion types, or do you have something cleverer?

submitted by DarthToaster
[link] [10 comments]

March 27, 2015 08:59 AM

StackOverflow

Why exactly is eval evil?

I know that Lisp and Scheme programmers usually say that eval should be avoided unless strictly necessary. I’ve seen the same recommendation for several programming languages, but I’ve not yet seen a list of clear arguments against the use of eval. Where can I find an account of the potential problems of using eval?

For example, I know the problems of GOTO in procedural programming (makes programs unreadable and hard to maintain, makes security problems hard to find, etc), but I’ve never seen the arguments against eval.

Interestingly, the same arguments against GOTO should be valid against continuations, but I see that Schemers, for example, won’t say that continuations are "evil" -- you should just be careful when using them. They’re much more likely to frown upon code using eval than upon code using continuations (as far as I can see -- I could be wrong).

Edit: WOW, that was fast! Three answers in less than five minutes! So, the answers so far are:

  • Not validating input from users and sending to eval is evil
  • Using eval I may end up with interpreted code instead of compiled
  • Eval could make code unreadable (although I think one can write unreadable code without any "powerful" features, so this is not much of an issue)
  • Beginners may be confused mixing compile-time and evaluation-time when mixing eval and macros (but I think it's not an issue once you get a firm grasp of how your language works -- be it Lisp or other)

So far, it seems that if I generate code (and not directly use anything from user input directly); if I know what environment eval will be run; and if I'm not expecting super-fast code, then eval is OK.

by Jay at March 27, 2015 08:59 AM

QuantOverflow

Stock market prediction [on hold]

Good afternoon, group members. My research area is financial data mining, and my research topic is stock market prediction using an artificial neural network. I have implemented the back-propagation algorithm. I took the Nifty index data set and normalized it. After normalization I divided the whole data set into two groups, training and testing. After that I applied the back-propagation algorithm. My question is: after finding the output of the test data set, how can I predict the close value? I do not see how to predict the next future value. Is there any logic or technique to apply after training and testing? I have followed and searched many research papers, but I have not found a solution. My deadline is next week; please reply, anyone. Thanks in advance.

by Rakhi Mahanta at March 27, 2015 08:39 AM

StackOverflow

Generating nested lists of Ints

I'm looking for a function:

def fun(init:Int,level:Int)

Such that:

fun(1,1) == List(1)
fun(1,2) == List(List(1),List(1))

This task is very easy in Java using a for loop. How can I write it in Scala in a functional programming style?
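
A hedged sketch: the two example results have different static types, so a single well-typed Scala function has to fall back to List[Any]. Assuming each level wraps two copies of the previous one, which is all the two examples pin down, a recursive version could look like:

// returns List(init) at level 1, and two copies of the previous level otherwise
def fun(init: Int, level: Int): List[Any] =
  if (level <= 1) List(init)
  else List.fill(2)(fun(init, level - 1))

fun(1, 1) // List(1)
fun(1, 2) // List(List(1), List(1))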

by Jade Tang at March 27, 2015 08:17 AM

Scala forall to compare two lists?

If I want to see whether each element in a list corresponds correctly to an element at the same index in another list, could I use forall to do this? For example, something like

val p=List(2,4,6)
val q=List(1,2,3)
p.forall(x=>x==q(x)/2)

I understand that the x isn't an index into q, and that's the problem I'm having. Is there any way to make this work?
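
For what it's worth, a minimal sketch of the usual positional approach, assuming the intent is that each element of p is double the element of q at the same index:

    p.zip(q).forall { case (x, y) => x == y * 2 }
    // or, with explicit indices:
    p.indices.forall(i => p(i) == q(i) * 2)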

by ggreeppy at March 27, 2015 08:08 AM

org.apache.spark.SparkException: Task not serializable

This is a working code example:

    JavaPairDStream<String, String> messages = KafkaUtils.createStream(javaStreamingContext, zkQuorum, group, topicMap);
    messages.print();
    JavaDStream<String> lines = messages.map(new Function<Tuple2<String, String>, String>() {
        @Override
        public String call(Tuple2<String, String> tuple2) {
            return tuple2._2();
        }
    });

ERROR: org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:166)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:158)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:1435)
    at org.apache.spark.streaming.dstream.DStream.map(DStream.scala:438)
    at org.apache.spark.streaming.api.java.JavaDStreamLike$class.map(JavaDStreamLike.scala:140)
    at org.apache.spark.streaming.api.java.JavaPairDStream.map(JavaPairDStream.scala:46)

by xiaolong li at March 27, 2015 08:00 AM

DataTau

QuantOverflow

Please give a step-by-step explanation on how to build a factor model

Factor models such as Fama-French, or the others that are partially summarized here, work on the cross-section of asset returns.

How are the factors built, how are sensitivities/coefficients estimated? In this context Fama-MacBeth regressions are usually mentioned. How does this work intuitively? Could anyone give a step-by-step manual?

by Richard at March 27, 2015 07:51 AM

StackOverflow

Call 2 Futures in the same Action.async Scala Play

I'm a newbie in Scala :( That said, I'm fighting with Play Framework's Action.async and Future calls.

I'd like to call 2 Futures in the same Action and wait until they both complete before sending their results to my view.

Here is the code:

    def showPageWithTags(category: String) = Action.async {
        val page = PageDAO.findOne(Json.obj("category" -> category)).map {
          case Some(page) => {
            page.content = page.content.byteArrayToString
          }
        }
        val futureTags = ArticleDAO.listTags
        Ok(views.html.web.pages.show(page, futureTags))
   }

with these function definitions:

def findOne(id: String): Future[Option[PageModel]]

def listTags: Future[List[String]] 

I get these errors:

[error]  found   : Some[A]
[error]  required: models.PageModel
[error]         case Some(page) => {
[error]              ^
.../...
[error]  found   : None.type
[error]  required: models.PageModel
[error]         case None => {
[error]              ^
.../...
[error]  found   : Option[Nothing]
[error]  required: scala.concurrent.Future[play.api.mvc.Result]
[error]       optionPage.map {
[error]                      ^
.../...
[error]  found   : scala.concurrent.Future[Unit]
[error]  required: models.PageModel
[error]         Ok(views.html.web.pages.show(optionPage, futureTags))
[error]                                      ^
.../...
[error]  found   : scala.concurrent.Future[List[String]]
[error]  required: List[String]
[error]         Ok(views.html.web.pages.show(optionPage, futureTags))
[error]                                                  ^

I've tried map, for/yield, and foreach to deal with the Option and the Future, but one of these errors always remains.

Yet the single-Future version was working fine before I added the "Tag" functionality:

  def showPage(category: String) = Action.async {
    PageDAO.findOne(Json.obj("category" -> category)).map {
      case Some(page) => {
        page.content = page.content.byteArrayToString
        Ok(views.html.web.pages.show(page))
      }
      case None => {
        NotFound
      }
    }
  }

How can I call 2 Futures in the same Action, wait until they both complete, and then pass them to my page view via Ok()?

Many thanks for any clarifications!
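
For reference, here is a minimal sketch of one common approach, assuming the show view can take the resolved page and the tag list as plain values: start both Futures before the for-comprehension so they run in parallel, then combine them.

    def showPageWithTags(category: String) = Action.async {
      // kick off both futures first so they run concurrently
      val futurePage = PageDAO.findOne(Json.obj("category" -> category))
      val futureTags = ArticleDAO.listTags
      for {
        maybePage <- futurePage
        tags      <- futureTags
      } yield maybePage match {
        case Some(page) =>
          page.content = page.content.byteArrayToString
          Ok(views.html.web.pages.show(page, tags))
        case None => NotFound
      }
    }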

by jeanjerome at March 27, 2015 07:51 AM

Lobsters

/r/compsci

/r/netsec

StackOverflow

Spark worker throwing ERROR SendingConnection: Exception while reading SendingConnection to ConnectionManagerId

I am trying to execute a simple example app with Spark, submitting the job using spark-submit:

spark-submit --class "SimpleJob" --master spark://:7077 target/scala-2.10/simple-project_2.10-1.0.jar

15/03/08 23:21:53 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/03/08 23:21:53 WARN LoadSnappy: Snappy native library not loaded
15/03/08 23:22:09 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
Lines with a: 21, Lines with b: 21

The job gives the correct results but prints the following errors after them:

15/03/08 23:22:28 ERROR SendingConnection: Exception while reading SendingConnection to ConnectionManagerId(<worker-host.domain.com>,53628)
java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureReadOpen(SocketChannelImpl.java:252)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:295)
    at org.apache.spark.network.SendingConnection.read(Connection.scala:390)
    at org.apache.spark.network.ConnectionManager$$anon$6.run(ConnectionManager.scala:205)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
15/03/08 23:22:28 ERROR ConnectionManager: Corresponding SendingConnection to ConnectionManagerId(<worker-host.domain.com>,53628) not found
15/03/08 23:22:28 WARN ConnectionManager: All connections not cleaned up

Following is the spark-defaults.conf

spark.serializer                 org.apache.spark.serializer.KryoSerializer
spark.driver.memory              5g
spark.master                     spark://<master-ip>:7077
spark.eventLog.enabled           true
spark.executor.extraClassPath   $SPARK-HOME/spark-cassandra-connector/spark-cassandra-connector/target/scala-2.10/spark-cassandra-connector-assembly-1.2.0-SNAPSHOT.jar
spark.cassandra.connection.conf.factory com.datastax.spark.connector.cql.DefaultConnectionFactory
spark.cassandra.auth.conf.factory       com.datastax.spark.connector.cql.DefaultAuthConfFactory
spark.cassandra.query.retry.count       10

Following is the spark-env.sh

SPARK_LOCAL_IP=<master-ip in master worker-ip in workers>
SPARK_MASTER_HOST='<master-hostname>'
SPARK_MASTER_IP=<master-ip>
SPARK_MASTER_PORT=7077
SPARK_WORKER_CORES=2
SPARK_WORKER_MEMORY=2g
SPARK_WORKER_INSTANCES=4

by tarun tiwari at March 27, 2015 06:44 AM

Which is better for a college pass-out's job: DBA or programming?

This is Vishnu Sharma. I am a fresher looking forward to starting my career, but I'm confused about which path to choose: DBA or programming. :)

by Vishnu Sharma at March 27, 2015 06:40 AM

/r/netsec

CompsciOverflow

Potential method analysis for Insert and Extract-max on a Max heap data structure

Suppose that you perform some sequence of operations on a max heap, in this case only Insert and Extract-max. Whenever the heap reaches its capacity N, you copy all the elements to a new heap of capacity 2N.

The goal is to come up with a potential function to analyze the amortized cost of Insert and Extract-max. More specifically, for Insert it should be $O(\log n)$ and for Extract-max it should be $O(1)$.

I understand the idea behind the potential method: you store potential energy in the data structure that can be released to pay for future operations. But how exactly do you come up with these functions? The answer to the above problem is $\Phi = |2n-N|+\log n$, but I am not sure how to arrive at it.

I tried searching online for some examples but I couldn't really find anything significant; every example was some kind of oversimplified generalization of a trivial problem (here, for instance).
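
For what it's worth, here is a sanity check (my own arithmetic, not from a source) that the stated potential pays for the expensive step. Suppose an Insert triggers the copy, so $n = N$ just before it. The actual cost is $\Theta(N)$ for the copy plus $O(\log n)$ for the sift-up. The potential just before is $|2N - N| + \log N = N + \log N$; just after, the capacity is $2N$ and $n = N + 1$, so the potential is $|2(N+1) - 2N| + \log(N+1) = 2 + \log(N+1)$. The potential thus drops by roughly $N$, which cancels the copying cost and leaves an amortized $O(\log n)$ for Insert.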

by jsguy at March 27, 2015 06:11 AM

/r/netsec

TheoryOverflow

Proof for Max flow for a single path = Min of all cuts of Max edge capacity in a single cut [on hold]

Let G=(V,E) be an undirected graph with capacities $c_e$ on each edge e $\in$ E. Let s,t be two of its vertices. Let ${\bf P}$ be the set of s-t paths in G and ${\bf C}$ be the subsets of edges that are s-t cuts. Show that

$\max_{P \in {\bf P}} \min_{e \in P} c_e = \min_{C \in {\bf C}} \max_{e \in C} c_e$

I've tried examples, and understand intuitively, but do not know how to approach this.

by gwhiz at March 27, 2015 05:56 AM

/r/netsec

StackOverflow

freebsd pw adduser command and python file descriptors

I'm trying to write a system management script in Python 2.7 on FreeBSD and I'm stuck trying to programmatically set the user's password when adding them. I'm using the FreeBSD pw command which has a -h flag which accepts a file descriptor as an argument.

The route I was taking uses Python's subprocess module, but I seem to be getting stuck: Python treats everything as strings, while the pw -h option expects a file descriptor (fd) back.
The command I'm trying to run is:

/usr/sbin/pw useradd foobar2 -C /usr/local/etc/bsdmanage/etc/pw.conf -m -c "BSDmanage foobar2 user" -G foobar2-www -h

I'm doing this via:

objTempPassFile = open(strTempDir + 'foobar.txt', 'w+')
objTempPassFile.write(strTempPass)
objTempPassFile.seek(0)

listCmdArgs = shlex.split(strPwUserCmd)
processUser = subprocess.Popen(listCmdArgs,stdin=objTempPassFile.fileno(),stdout=subprocess.PIPE,stderr=subprocess.PIPE)
strOutPut, strErrorValue = processUser.communicate()

where strPwUserCmd is the above pw command and strTempPass is just a string.

I also tried passing the password string as an option to Popen.communicate() and changing stdin to stdin=subprocess.PIPE

I also tried using a StringIO object. However, passing that either gets errors about it not being a valid I/O object, or the pw command fails and doesn't see any argument passed to the -h switch.

FreeBSD pw manpage

Any ideas? Thanks.

by Henrik Hudson at March 27, 2015 05:42 AM

/r/dependent_types

StackOverflow

Sending struct containing other struct as ZeroMQ message

I have a problem sending a ZeroMQ message built from a pointer to a struct which contains another struct.

The server code:

#include <zmq.hpp>
#include <string>
#include <iostream>

using namespace zmq;
using namespace std;

struct structB{
    int a;
    string c;
};

struct structC{
    int z;
    struct structB b;
};

int main()
{
    context_t context(1);
    socket_t *socket = new socket_t(context,ZMQ_REP);
    socket->bind("tcp://*:5555");

    message_t *request = new message_t();   
    socket->recv(request);

    struct structB messageB; 
    messageB.a=0;
    messageB.c="aa";

    struct structC *messageC = new struct structC;
    messageC->z = 4;
    messageC->b = messageB;

    char *buffer = (char*)(messageC);
    message_t *reply = new message_t((void*)buffer,
                       +sizeof(struct structB)
                       +sizeof(struct structC)
                       ,0);
    socket->send(*reply);

    return 0;
}

Client code:

#include <zmq.hpp>
#include <iostream>
#include <string>

using namespace std;
using namespace zmq;

struct structB{
    int a;
    string c;
};

struct structC{
    int z;
    struct structB b;
};

int main()
{ 
    context_t context(1);
    socket_t *socket = new socket_t(context,ZMQ_REQ);
    socket->connect("tcp://*:5555");

    const char* buffer = "abc";
    message_t *request = new message_t((void*)buffer,sizeof(char*),0);
    socket->send(*request);

    message_t *reply = new message_t;
    socket->recv(reply);

    struct structC *messageC = new struct structC;
    messageC = static_cast<struct structC*>(reply->data());

cout<<messageC->b.a<<endl;//no crash here

    struct structB messageB = messageC->b;//Segmentation fault (core dumped)

    return 0;
}

This program crashes when I try to use the string called "c" from structB. It doesn't matter whether I try to print it or assign the whole structB, as in the above example.

Where is the problem? Should I create message_t *reply on the server side in a different way?

by user2923525 at March 27, 2015 05:31 AM

QuantOverflow

What is the difference between a benchmark yield curve, funding curve and a basis spread curve?

I am trying to understand why these curves are important, and what they are used for in the industry today (if at all).

by Yodan at March 27, 2015 05:06 AM

StackOverflow

In Ansible, how can I set the log file name dynamically?

I'm currently developing an Ansible script to build and deploy a Java project.

So I can set the log_path like below:

log_path=/var/log/ansible.log

But it is hard to look up the build history. Is it possible to append a datetime to the log file name?

for example,

ansible.20150326145515.log

by Ickhyun Kwon at March 27, 2015 05:00 AM

Wondermark

#1009; Live and Let Art

This Classic Wondermark was originally published March 17, 2014!

by David Malki at March 27, 2015 05:00 AM

DataTau

CompsciOverflow

Algorithm to distribute items "evenly"

I'm searching for an algorithm to distribute values from a list so that the resulting list is as "balanced" or "evenly distributed" as possible (in quotes because I'm not sure these are the best ways to describe it... later I'll provide a way to measure whether one result is better than another).

So, for the list:

[1, 1, 2, 2, 3, 3]

One of the best results, after re-distributing the values, is:

[1, 2, 3, 1, 2, 3]

There may be other results as good as this one, and of course this gets more complicated with a less uniform set of values.

This is how to measure whether one result is better than another:

  1. Count the distances between each item and the next item with the same value.

  2. Calculate the standard deviation for that set of distances. A lower dispersion means a better result.

Observations:

  • When calculating a distance and the end of the list is reached without finding an item with the same value, we go back to the beginning of the list. So, at most, the same item will be found and the distance for that item will be the length of the list. This means that the list is cyclic;
  • A typical list has ~50 items with ~15 different values in varied quantities.

So:

  • For the result [1, 2, 3, 1, 2, 3], the distances are [3, 3, 3, 3, 3, 3], and the standard deviation is 0;
  • For the result [1, 1, 2, 2, 3, 3], the distances are [1, 5, 1, 5, 1, 5], and the standard deviation is 2;
  • Which makes the first result better than the second (lower deviation is better).

Given these definitions, I ask for a clue as to which algorithms or strategies I should search for.
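
For concreteness, here is a minimal Scala sketch of the scoring measure described above (the names are mine):

    def distances(xs: List[Int]): List[Int] = {
      val n = xs.length
      // for each position, the distance to the next occurrence of the same
      // value, wrapping around the end (the list is treated as cyclic)
      xs.indices.map { i =>
        (1 to n).find(d => xs((i + d) % n) == xs(i)).get
      }.toList
    }

    def score(xs: List[Int]): Double = {
      val ds = distances(xs)
      val mean = ds.sum.toDouble / ds.size
      math.sqrt(ds.map(d => (d - mean) * (d - mean)).sum / ds.size)
    }

    // score(List(1, 2, 3, 1, 2, 3)) == 0.0
    // score(List(1, 1, 2, 2, 3, 3)) == 2.0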

by moraes at March 27, 2015 04:40 AM

StackOverflow

Does copy and update for immutable record types in F# share or copy memory?

Does the copy and update procedure for immutable records in F# share or copy memory? Meaning, in the following code

type MyRecord = {
    X: int;
    Y: int;
    Z: int 
    }

let myRecord1 = { X = 1; Y = 2; Z = 3; }
let myRecord2 = { myRecord1 with Y = 100; Z = 2 }

do myRecord1 and myRecord2 share memory for the variable X? More generally, is there a good reference that denotes exactly which immutable/persistent data structures in F# actively share memory?

by wyer33 at March 27, 2015 04:13 AM

Scala regex extract domain from urls

I want to extract bell.com from the following inputs using a Scala regex. I have tried a few variations without success.

"www.bell.com"
"bell.com"
"http://www.bell.com"
"https://www.bell.com"
"https://bell.com/about"
"https://www.bell.com?token=123"

This is my code, but it is not working.

val pattern = """(?:([http|https]://)?)(?:(www\.)?)([A-Za-z0-9._%+-]+)[/]?(?:.*)""".r
url match {
  case pattern(domain) =>
    print(domain)
  case _ => print("not found!")
}

EDIT: My regex was wrong. Thanks to @Tabo, this is the correct one:

(?:https?://)?(?:www\.)?([A-Za-z0-9._%+-]+)/?.*
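
For completeness, a minimal sketch of using it with a single capturing group (Scala's pattern matching anchors the regex against the whole string):

    val pattern = """(?:https?://)?(?:www\.)?([A-Za-z0-9._%+-]+)/?.*""".r

    "https://www.bell.com?token=123" match {
      case pattern(domain) => print(domain)   // bell.com
      case _               => print("not found!")
    }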

by angelokh at March 27, 2015 04:12 AM

Simple Calculator using python3

I'm trying to make a simple calculator using only these kinds of statements. It keeps running all the operations continuously, even though the operation should be chosen by the user. What should I do? :(

def calc():
    if mode == "1":
        print("Addition")
        main = add()
    elif mode == "2":
        print("Subtraction")
        main = sub()
    elif mode == "3":
        print("Multiplication")
        main = mult()
    elif mode == "4":
        print("Division")
        main = div()
        print("Quotient is equal to", main)
    elif mode == "5":
        print("Modulo")
        main = mod()
    elif mode == "6":
        print("Raise a number to an exponent")
        main = exp()
    elif mode == "7":
        print("Square root")
        main = sq()


def add():
    x = float(input("Enter first number: "))
    y = float(input("Enter second number: "))
    mad = x+y
    print("Sum is equal to", mad)
    return
def sub():
    x = float(input("Enter first number: "))
    y = float(input("Enter second number: "))
    mad = x-y
    print("Difference is equal to", mad)
    return
def mult():
    x = float(input("Enter first number: "))
    y = float(input("Enter second number: "))
    mad = x*y
    print("Product is equal to", mad)
    return
def div():
    x = float(input("Enter first number: "))
    y = float(input("Enter second number: "))
    mad = x/y
    print("Quotient is equal to", mad)
    return
def mod():
    x = float(input("Enter first number: "))
    y = float(input("Enter second number: "))
    mad = x%y
    print("Remainder is equal to", mad)
    return
def exp():
    x = float(input("Enter first number: "))
    y = float(input("Enter second number: "))
    mad = x**y
    print(x, "raised to", y, "is equal to", mad)
    return
def sq():
    x = float(input("Enter first number: "))
    mad = x**0.5
    print("Square root of", x, "is", mad)
    return

print("1. Add")
print("2. Subtract")
print("3. Multiply")
print("4. Division")
print("5. Modulo")
print("6. Exponent")
print("7. Square root")
print("8. Exit")
mode = input("What do you want to do? ")

calc()
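# NOTE: these unconditional calls run every operation in sequence,
# regardless of the mode chosen above; this is why all answers appear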
add()
sub()
mult()
div()
mod()
exp()
sq()

It keeps asking me for input numbers and then provides answers for all the operations, one after the other.

by lostsoul at March 27, 2015 04:06 AM

Halfbakery

XKCD

StackOverflow

Is it possible to implement a function that returns an n-tuple on the lambda calculus?

An n-tuple in the lambda calculus is usually defined as:

1-tuple:     λ a t . t a
1-tuple-fst: λ t . t (λ a . a)

2-tuple:     λ a b t . t a b
2-tuple-fst: λ t . t (λ a b . a)
2-tuple-snd: λ t . t (λ a b . b)

3-tuple:     λ a b c t . t a b c
3-tuple-fst: λ t . t (λ a b c . a)
3-tuple-snd: λ t . t (λ a b c . b)
3-tuple-trd: λ t . t (λ a b c . c)

... and so on.

My question is: is it possible to implement a function that receives a church number N and returns the corresponding N-tuple for any N? Also, is it possible to extend this function so it also returns the corresponding accessors? The algorithm can't use any form of recursion, including fixed-point combinators.

~

Edit: as requested, elaborating on what I've tried.

I want that function not to depend on recursion / fixed-point combinators, so the obvious way to do it would be to use Church numbers for repetition. That said, I have tried randomly testing many expressions in order to learn how they grow. For example:

church_4 (λ a b c . a (b c))

Reduces to:

(λ a b c d e f . a ((((e d) c) b) a))

I've compared the reduction of many similar combinations church_4 (λ a b c . (a (b c))) to my desired results and noticed that I could implement the accessors as:

firstOf = (λ max n . (firstOf (sub max n) (firstOf n)))
access = (λ max idx t . (t (firstOf (sub max idx) (firstOf idx))))

Where sub is the subtraction operator and access church_5 church_2 means accessing the 3rd (because 2 is the 3rd natural) element of a 6-tuple.

Now, on the tuples. Notice that the problem is finding a term my_term such that, for example:

church_3 my_term

had the following normal form:

(λ a b c d t . ((((t a) b) c) d))

As you can see, I've almost found it, since:

church_3 (λ a b c . a (b c)) (λ a . a)

Reduces to:

(λ a b c d . (((a b) c) d))

Which is almost the result I need, except that the t is missing.

That is what I've tried so far.

by Viclib at March 27, 2015 03:57 AM

How to enforce scala macro annotation constraints on annotated classes?

I am trying to implement a "simple" util library that puts annotated ScalaTest suites into a Suites instance to give them a certain running context.

trait MyContext extends BeforeAndAfterAll {
  //declarations of values
  override def beforeAll() = {/* run various init procedures */}
  override def afterAll() = {/* tear everything down */}
}

That part works, and I can use it if I code the Suites instance myself. What I would like to write is a Scala annotation with a macro that takes all annotated org.scalatest.Suite subtypes and generates the Suites class, as in:

class testThisInContext extends StaticAnnotation{ /* ...  */}

@testThisInContext
class TestOne extends WordSpec {}

@testThisInContext
class TestTwo extends FlatSuite {}

And it would then generate:

class InContextSuites extends Suites(new TestOne, new TestTwo) with MyContext {}

and also modify the classes by adding the @org.scalatest.DoNotDiscover annotation to them (to avoid execution out of context).

I need a way to interrupt the application of the macro (and throw an error) when the annotated class is not a subclass of Suite (which would make the generated class not compile).

I also have not figured out how to type check annotations in the modifiers instance of a ClassDef (in order to add an annotation if needed).

by ɭɘ ɖɵʊɒɼɖ 江戸 at March 27, 2015 03:41 AM

DataTau

StackOverflow

spray-json JsonFormat case classes

I'm facing a problem trying to implement a JsonFormat object for a case class that is generic. This is my class:

case class SimpleQuery[T](field : String, op : Operator, value : T) extends Query{
  def getType = ????
}

I'm trying to use the format that the GitHub page of spray-json suggests, like this:

implicit def SimpleQueryJsonFormat[A <: JsonFormat] = jsonFormat4(SimpleQuery.apply[A])

But I get this compiler error

trait JsonFormat takes type parameters

The example from spray-json github page is the following:

case class NamedList[A](name: String, items: List[A])

object MyJsonProtocol extends DefaultJsonProtocol {
  implicit def namedListFormat[A :JsonFormat] = jsonFormat2(NamedList.apply[A])
}

That seems really similar to mine.

I'll also open an issue on the GitHub page.

Thank you in advance
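
For reference, a minimal sketch of my reading of the problem: A <: JsonFormat is a subtype bound on the unapplied JsonFormat trait (hence "takes type parameters"), whereas the spray-json example uses a context bound, A : JsonFormat. Also, SimpleQuery has three fields, so jsonFormat3 rather than jsonFormat4. This assumes a JsonFormat for Operator is already in scope:

    implicit def simpleQueryFormat[A : JsonFormat]: RootJsonFormat[SimpleQuery[A]] =
      jsonFormat3(SimpleQuery.apply[A])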

by tmnd91 at March 27, 2015 02:52 AM

/r/compsci

Who's interested in a study group for Nand2Tetris, a self-paced course that teaches you to build a computer from the ground up?

I'm trying to organize a weekly study group for the self-paced course Nand2Tetris, which teaches you to build and program a computer step by step from logic gates to higher level programming (NAND gates to Tetris). I've linked their website below for those who aren't familiar with it, it's really cool, it's been taught at such prestigious institutions as MIT and Stanford, and you should check it out. If you're interested in the study group, leave a comment in my post in /r/NandToTetris, which is also linked below. Thanks!

http://www.nand2tetris.org/

http://www.reddit.com/r/NandToTetris/comments/2zv75o/study_group/

Edit: Here's the group discussion thread.

http://www.reddit.com/r/NandToTetris/comments/30ijtj/study_group_discussion_thread/

submitted by kodomo
[link] [4 comments]

March 27, 2015 02:23 AM

/r/emacs

The kill ring and the clipboard

I've never been able to get the hang of the kill ring, particularly how it relates to the clipboard (I'm using the GUI on OS X). I'm OK with cycling through the kill ring, but here's the scenario that comes up again and again: I copy something outside of emacs to the clipboard, I switch to emacs, remove the text that I was going to replace with the contents of the clipboard, and then paste the same text that I just removed because removing the text replaced the original clipboard contents. Do you just get used to this behavior and always remember to remove the text before copying, or is there some way to configure it? Ideally I'd like to keep the emacs kill-ring and the OS X clipboard completely separate, so I can Cmd-C outside of emacs, do stuff in emacs including killing and yanking and still Cmd-V the text I had originally copied. Or maybe I don't really understand the kill-ring.

submitted by harumphfrog
[link] [34 comments]

March 27, 2015 02:22 AM

CompsciOverflow

I can not see why MSD radix sort is theoretically as efficient as LSD radix sort

Here is the pseudocode for LSD radix sort from http://www.cs.princeton.edu/~rs/AlgsDS07/18RadixSort.pdf

   public static void lsd(String[] a)
   {
        int N = a.length;
        int W = a[0].length();           // assumes all strings have equal length
        int R = 256;                     // the radix (extended ASCII)
        String[] temp = new String[N];   // auxiliary array, missing in the slides
        for (int d = W-1; d >= 0; d--)
        {
             // key-indexed counting on character d
             int[] count = new int[R+1];
             for (int i = 0; i < N; i++)
                   count[a[i].charAt(d) + 1]++;
             for (int k = 1; k < R; k++)
                   count[k] += count[k-1];
             for (int i = 0; i < N; i++)
                   temp[count[a[i].charAt(d)]++] = a[i];
             for (int i = 0; i < N; i++)
                   a[i] = temp[i];
        }
   }

I guess that by 256 they mean $R$. We have a simple for loop, so the time is $\Theta(W(R+N))$ The reason why I use $\Theta$ is because the bound is tight, this is how many operations you will be doing no matter what, so it's worst case and best case performance at the same time. The space requirements are $\Theta(N+R)$

The following is the pseudocode of MSD:

    public static void msd(String[] a)
    { 
        msd(a, 0, a.length, 0); 
    }

    private static void msd(String[] a, int lo, int hi, int d)
    {
        if (hi <= lo+1) return;
        int R = 256;                          // the radix
        String[] temp = new String[hi - lo];  // auxiliary array, missing in the slides
        int[] count = new int[R+1];
        for (int i = lo; i < hi; i++)
             count[a[i].charAt(d)+1]++;
        for (int k = 1; k < R; k++)
             count[k] += count[k-1];
        for (int i = lo; i < hi; i++)
             temp[count[a[i].charAt(d)]++] = a[i];
        for (int i = lo; i < hi; i++)
             a[i] = temp[i - lo];
        for (int i = 0; i < R-1; i++)
             msd(a, lo+count[i], lo+count[i+1], d+1);
   }

Since the recursion stops after at most $W$ levels, the space complexity becomes $\Theta(N + WR)$.

However, what about time? Just from the space alone, it seems like MSD is actually pretty bad. What can you say about the time? To me it looks like you have to spend $R$ operations for every node in the recursion tree and also $N$ operations per level. So the time is $\Theta(NW + R \cdot amountOfNodes)$.

I do not see how or why MSD would ever be preferred in practice given these time bounds. They are pretty horrible, but I am still not sure whether I have derived the time bounds correctly; for instance, I have no clue how big amountOfNodes is.

Keep in mind that I would like the analysis to be in terms of $R$ and $N$. I know that in practice $R$ is usually small, so it's just a constant, but I am not interested in that scenario, since I would like to see how much $R$ affects the performance as well.
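
For what it's worth, my own back-of-the-envelope bound on amountOfNodes: the recursion only continues into buckets with at least two elements, so each level of the tree has at most $N/2$ nodes, and with at most $W$ levels, $amountOfNodes \leq NW/2$. That gives a worst case of $O(NW + RNW)$, which is indeed worse than LSD when $R$ matters; this seems to be exactly why practical MSD implementations switch to insertion sort for small subarrays instead of paying $R$ per tiny bucket.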

I asked a somewhat similar question yesterday but did not receive any replies; I think that was because my question was not well structured, so I could remove that question if the admins want. Thank you in advance!

by jsguy at March 27, 2015 01:44 AM

/r/clojure

arXiv Networking and Internet Architecture

ICONA: Inter Cluster ONOS Network Application. (arXiv:1503.07798v1 [cs.NI])

Several Network Operating Systems (NOS) have been proposed in the last few years for Software Defined Networks; however, a few of them are currently offering the resiliency, scalability and high availability required for production environments. Open Networking Operating System (ONOS) is an open source NOS, designed to be reliable and to scale up to thousands of managed devices. It supports multiple concurrent instances (a cluster of controllers) with distributed data stores. A tight requirement of ONOS is that all instances must be close enough to have negligible communication delays, which means they are typically installed within a single datacenter or a LAN network. However in certain wide area network scenarios, this constraint may limit the speed of responsiveness of the controller toward network events like failures or congested links, an important requirement from the point of view of a Service Provider. This paper presents ICONA, a tool developed on top of ONOS and designed in order to extend ONOS capability in network scenarios where there are stringent requirements in term of control plane responsiveness. In particular the paper describes the architecture behind ICONA and provides some initial evaluation obtained on a preliminary version of the tool.

by <a href="http://arxiv.org/find/cs/1/au:+Gerola_M/0/1/0/all/0/1">M. Gerola</a>, <a href="http://arxiv.org/find/cs/1/au:+Santuari_M/0/1/0/all/0/1">M. Santuari</a>, <a href="http://arxiv.org/find/cs/1/au:+Salvadori_E/0/1/0/all/0/1">E. Salvadori</a>, <a href="http://arxiv.org/find/cs/1/au:+Salsano_S/0/1/0/all/0/1">S. Salsano</a>, <a href="http://arxiv.org/find/cs/1/au:+Campanella_M/0/1/0/all/0/1">M. Campanella</a>, <a href="http://arxiv.org/find/cs/1/au:+Ventre_P/0/1/0/all/0/1">P. L. Ventre</a>, <a href="http://arxiv.org/find/cs/1/au:+Al_Shabibi_A/0/1/0/all/0/1">A. Al-Shabibi</a>, <a href="http://arxiv.org/find/cs/1/au:+Snow_W/0/1/0/all/0/1">W. Snow</a> at March 27, 2015 01:30 AM

Incremental Computation with Names. (arXiv:1503.07792v1 [cs.PL])

Over the past thirty years, there has been significant progress in developing general-purpose, language-based approaches to incremental computation, which aims to efficiently update the result of a computation when an input is changed. A key design challenge in such approaches is how to provide efficient incremental support for a broad range of programs. In this paper, we argue that first-class names are a critical linguistic feature for efficient incremental computation. Names identify computations to be reused across differing runs of a program, and making them first class gives programmers a high level of control over reuse. We demonstrate the benefits of names by presenting Nominal Adapton, an ML-like language for incremental computation with names. We describe how to use Nominal Adapton to efficiently incrementalize several standard programming patterns---including maps, folds, and unfolds---and show how to build efficient, incremental probabilistic trees and tries. Since Nominal Adapton's implementation is subtle, we formalize it as a core calculus and prove it is from-scratch consistent, meaning it always produces the same answer as simply re-running the computation. Finally, we demonstrate that Nominal Adapton can provide large speedups over both from-scratch computation and Adapton, a previous state-of-the-art incremental system.

by <a href="http://arxiv.org/find/cs/1/au:+Hammer_M/0/1/0/all/0/1">Matthew A. Hammer</a>, <a href="http://arxiv.org/find/cs/1/au:+Dunfield_J/0/1/0/all/0/1">Joshua Dunfield</a>, <a href="http://arxiv.org/find/cs/1/au:+Headley_K/0/1/0/all/0/1">Kyle Headley</a>, <a href="http://arxiv.org/find/cs/1/au:+Labich_N/0/1/0/all/0/1">Nicholas Labich</a>, <a href="http://arxiv.org/find/cs/1/au:+Foster_J/0/1/0/all/0/1">Jeffrey S. Foster</a>, <a href="http://arxiv.org/find/cs/1/au:+Hicks_M/0/1/0/all/0/1">Michael Hicks</a>, <a href="http://arxiv.org/find/cs/1/au:+Horn_D/0/1/0/all/0/1">David Van Horn</a> at March 27, 2015 01:30 AM

NeuCoin: the First Secure, Cost-efficient and Decentralized Cryptocurrency. (arXiv:1503.07768v1 [cs.CR])

NeuCoin is a decentralized peer-to-peer cryptocurrency derived from Sunny King's Peercoin, which itself was derived from Satoshi Nakamoto's Bitcoin. As with Peercoin, proof-of-stake replaces proof-of-work as NeuCoin's security model, effectively replacing the operating costs of Bitcoin miners (electricity, computers) with the capital costs of holding the currency. Proof-of-stake also avoids proof-of-work's inherent tendency towards centralization resulting from competition for coinbase rewards among miners based on lowest cost electricity and hash power.

NeuCoin increases security relative to Peercoin and other existing proof-of-stake currencies in numerous ways, including: (1) incentivizing nodes to continuously stake coins over time through substantially higher mining rewards and lower minimum stake age; (2) abandoning the use of coin age in the mining formula; (3) causing the stake modifier parameter to change over time for each stake; and (4) utilizing a client that punishes nodes that attempt to mine on multiple branches with duplicate stakes.

This paper demonstrates how NeuCoin's proof-of-stake implementation addresses all commonly raised "nothing at stake" objections to generic proof-of-stake systems. It also reviews many of the flaws of proof-of-work designs to highlight the potential for an alternate cryptocurrency that solves these flaws.

by <a href="http://arxiv.org/find/cs/1/au:+Davarpanah_K/0/1/0/all/0/1">Kourosh Davarpanah</a>, <a href="http://arxiv.org/find/cs/1/au:+Kaufman_D/0/1/0/all/0/1">Dan Kaufman</a>, <a href="http://arxiv.org/find/cs/1/au:+Pubellier_O/0/1/0/all/0/1">Ophelie Pubellier</a> at March 27, 2015 01:30 AM

Large-scale Biological Meta-Database Management. (arXiv:1503.07759v1 [cs.DC])

Up-to-date meta-databases are vital for the analysis of biological data. The current exponential increase in biological data is also exponentially increasing meta-database sizes. Large-scale meta-database management is therefore an important challenge for platforms providing services for biological data analysis. In particular, there is a need either to run an analysis with a particular version of a meta-database, or to rerun an analysis with an updated meta-database. We present our GeStore approach for biological meta-database management. It provides efficient storage and runtime generation of specific meta-database versions, and efficient incremental updates for biological data analysis tools. The approach is transparent to the tools, and we provide a framework that makes it easy to integrate GeStore with biological data analysis frameworks. We present the GeStore system, as well as an evaluation of the performance characteristics of the system, and an evaluation of the benefits for a biological data analysis workflow.

by <a href="http://arxiv.org/find/cs/1/au:+Pedersen_E/0/1/0/all/0/1">Edvard Pedersen</a>, <a href="http://arxiv.org/find/cs/1/au:+Bongo_L/0/1/0/all/0/1">Lars Ailo Bongo</a> at March 27, 2015 01:30 AM

ASPeRiX, a First Order Forward Chaining Approach for Answer Set Computing. (arXiv:1503.07717v1 [cs.LO])

The natural way to use Answer Set Programming (ASP) to represent knowledge in Artificial Intelligence or to solve a combinatorial problem is to elaborate a first order logic program with default negation. In a preliminary step this program with variables is translated in an equivalent propositional one by a first tool: the grounder. Then, the propositional program is given to a second tool: the solver. This last one computes (if they exist) one or many answer sets (stable models) of the program, each answer set encoding one solution of the initial problem. Until today, almost all ASP systems apply this two steps computation. In this article, the project ASPeRiX is presented as a first order forward chaining approach for Answer Set Computing. This project was amongst the first to introduce an approach of answer set computing that escapes the preliminary phase of rule instantiation by integrating it in the search process. The methodology applies a forward chaining of first order rules that are grounded on the fly by means of previously produced atoms. Theoretical foundations of the approach are presented, the main algorithms of the ASP solver ASPeRiX are detailed and some experiments and comparisons with existing systems are provided.

by <a href="http://arxiv.org/find/cs/1/au:+Lefevre_C/0/1/0/all/0/1">Claire Lefèvre</a>, <a href="http://arxiv.org/find/cs/1/au:+Beatrix_C/0/1/0/all/0/1">Christopher Béatrix</a>, <a href="http://arxiv.org/find/cs/1/au:+Stephan_I/0/1/0/all/0/1">Igor Stéphan</a>, <a href="http://arxiv.org/find/cs/1/au:+Garcia_L/0/1/0/all/0/1">Laurent Garcia</a> at March 27, 2015 01:30 AM

Sensitivity and Computational Complexity in Financial Networks. (arXiv:1503.07676v1 [q-fin.CP])

Modern financial networks exhibit a high degree of interconnectedness and determining the causes of instability and contagion in financial networks is necessary to inform policy and avoid future financial collapse. In the American Economic Review, Elliott, Golub and Jackson proposed a simple model for capturing the dynamics of complex financial networks. In Elliott, Golub and Jackson's model, each institution in the network can buy underlying assets or percentage shares in other institutions (cross-holdings) and if any institution's value drops below a critical threshold value, its value suffers an additional failure cost.

This work shows that even in simple model put forward by Elliott, Golub and Jackson there are fundamental barriers to understanding the risks that are inherent in a network. First, if institutions are not required to maintain a minimum amount of self-holdings, an $\epsilon$ change in investments by a single institution can have an arbitrarily magnified influence on the net worth of the institutions in the system. This sensitivity result shows that if institutions have small self-holdings, then estimating the market value of an institution requires almost perfect information about every cross-holding in the system. Second, we show that even if a regulator has complete information about all cross-holdings in the system, it may be computationally intractable to even estimate the number of failures that could be caused by an arbitrarily small shock to the system. Together, these results show that any uncertainty in the cross-holdings or values of the underlying assets can be amplified by the network to arbitrarily large uncertainty in the valuations of institutions in the network.

by <a href="http://arxiv.org/find/q-fin/1/au:+Hemenway_B/0/1/0/all/0/1">Brett Hemenway</a>, <a href="http://arxiv.org/find/q-fin/1/au:+Khanna_S/0/1/0/all/0/1">Sanjeev Khanna</a> at March 27, 2015 01:30 AM

Loo.py: From Fortran to performance via transformation and substitution rules. (arXiv:1503.07659v1 [cs.PL])

A large amount of numerically-oriented code is written and is being written in legacy languages. Much of this code could, in principle, make good use of data-parallel throughput-oriented computer architectures. Loo.py, a transformation-based programming system targeted at GPUs and general data-parallel architectures, provides a mechanism for user-controlled transformation of array programs. This transformation capability is designed to not just apply to programs written specifically for Loo.py, but also those imported from other languages such as Fortran. It eases the trade-off between achieving high performance, portability, and programmability by allowing the user to apply a large and growing family of transformations to an input program. These transformations are expressed in and used from Python and may be applied from a variety of settings, including a pragma-like manner from other languages.

by <a href="http://arxiv.org/find/cs/1/au:+Klockner_A/0/1/0/all/0/1">Andreas Klöckner</a> at March 27, 2015 01:30 AM

Automated Verification Of Role-Based Access Control Policies Constraints Using Prover9. (arXiv:1503.07645v1 [cs.CR])

Access control policies are used to restrict access to sensitive records for authorized users only. One approach for specifying policies is using role based access control (RBAC) where authorization is given to roles instead of users. Users are assigned to roles such that each user can access all the records that are allowed to his/her role. RBAC has a great interest because of its flexibility. One issue in RBAC is dealing with constraints. Usually, policies should satisfy pre-defined constraints as for example separation of duty (SOD) which states that users are not allowed to play two conflicting roles. Verifying the satisfiability of constraints based on policies is time consuming and may lead to errors. Therefore, an automated verification is essential. In this paper, we propose a theory for specifying policies and constraints in first order logic. Furthermore, we present a comprehensive list of constraints. We identity constraints based on the relation between users and roles, between roles and permission on records, between users and permission on records, and between users, roles, and permission on records. Then, we use a general purpose theorem prover tool called Prover9 for proving the satisfaction of constraints.

by <a href="http://arxiv.org/find/cs/1/au:+Sabri_K/0/1/0/all/0/1">Khair Eddin Sabri</a> at March 27, 2015 01:30 AM

A Closed-Loop UL Power Control Scheme for Interference Mitigation in Dynamic TD-LTE Systems. (arXiv:1503.07640v1 [cs.NI])

The TD-LTE system is envisaged to adopt dynamic time division duplexing (TDD) transmissions for small cells to adapt their communication service to the fast variation of downlink (DL) and uplink (UL) traffic demands. However, different DL/UL directions for the same subframe in adjacent cells will result in new destructive interference components, i.e., eNB-to-eNB and UE-to-UE, with levels that can significantly differ from one subframe to another. In this paper, a feasible UL power control mechanism is proposed to manage eNB-to-eNB interference, where different UL power control parameters are set based on different interference level. We consider the geometric location information and the subframe set selection process about adjacent eNBs when the interference level is estimated. The performance of the proposed scheme is evaluated through system level simulations and it is shown that the scheme can achieve preferable improvement in terms of UL average and 5%-ile packet throughputs compared with the original scheme without power control. Also, the UE-to-UE interference is not worse when the UE transmit power become higher.

by <a href="http://arxiv.org/find/cs/1/au:+Chen_Q/0/1/0/all/0/1">Qinqin Chen</a>, <a href="http://arxiv.org/find/cs/1/au:+Zhao_H/0/1/0/all/0/1">Hui Zhao</a>, <a href="http://arxiv.org/find/cs/1/au:+Li_L/0/1/0/all/0/1">Lin Li</a>, <a href="http://arxiv.org/find/cs/1/au:+Long_H/0/1/0/all/0/1">Hang Long</a>, <a href="http://arxiv.org/find/cs/1/au:+Wang_J/0/1/0/all/0/1">Jianquan Wang</a>, <a href="http://arxiv.org/find/cs/1/au:+Hou_X/0/1/0/all/0/1">Xiaoyue Hou</a> at March 27, 2015 01:30 AM

Indoor Localization Algorithm For Smartphones. (arXiv:1503.07628v1 [cs.NI])

Increasing sources of sensor measurements and prior knowledge have become available for indoor localization on smartphones. How to effectively utilize these sources for enhancing localization accuracy is an important yet challenging problem. In this paper, we present an area state-aided localization algorithm that exploits various sources of information. Specifically, we introduce the concept of area state to indicate the area where the user is on an indoor map. The position of the user is then estimated using inertial measurement unit (IMU) measurements with the aid of area states. The area states are in turn updated based on the position estimates. To avoid accumulated errors of IMU measurements, our algorithm uses WiFi received signal strength indicator (RSSI) to indicate the vicinity of the user to the routers. The experiment results show that our system can achieve satisfactory localization accuracy in a typical indoor environment.

by <a href="http://arxiv.org/find/cs/1/au:+Zhang_K/0/1/0/all/0/1">Kaiqing Zhang</a>, <a href="http://arxiv.org/find/cs/1/au:+Hu_H/0/1/0/all/0/1">Hong Hu</a>, <a href="http://arxiv.org/find/cs/1/au:+Dai_W/0/1/0/all/0/1">Wenhan Dai</a>, <a href="http://arxiv.org/find/cs/1/au:+Shen_Y/0/1/0/all/0/1">Yuan Shen</a>, <a href="http://arxiv.org/find/cs/1/au:+Win_M/0/1/0/all/0/1">Moe Z. Win</a> at March 27, 2015 01:30 AM

Simultaneous Bidirectional Link Selection in Full Duplex MIMO Systems. (arXiv:1503.07604v1 [cs.IT])

In this paper, we consider a point to point full duplex (FD) MIMO communication system. We assume that each node is equipped with an arbitrary number of antennas which can be used for transmission or reception. With FD radios, bidirectional information exchange between two nodes can be achieved at the same time. In this paper we design bidirectional link selection schemes by selecting a pair of transmit and receive antenna at both ends for communications in each direction to maximize the weighted sum rate or minimize the weighted sum symbol error rate (SER). The optimal selection schemes require exhaustive search, so they are highly complex. To tackle this problem, we propose a Serial-Max selection algorithm, which approaches the exhaustive search methods with much lower complexity. In the Serial-Max method, the antenna pairs with maximum "obtainable SINR" at both ends are selected in a two-step serial way. The performance of the proposed Serial-Max method is analyzed, and the closed-form expressions of the average weighted sum rate and the weighted sum SER are derived. The analysis is validated by simulations. Both analytical and simulation results show that as the number of antennas increases, the Serial-Max method approaches the performance of the exhaustive-search schemes in terms of sum rate and sum SER.

by <a href="http://arxiv.org/find/cs/1/au:+Zhou_M/0/1/0/all/0/1">Mingxin Zhou</a>, <a href="http://arxiv.org/find/cs/1/au:+Song_L/0/1/0/all/0/1">Lingyang Song</a>, <a href="http://arxiv.org/find/cs/1/au:+Li_Y/0/1/0/all/0/1">Yonghui Li</a>, <a href="http://arxiv.org/find/cs/1/au:+Li_X/0/1/0/all/0/1">Xuelong Li</a> at March 27, 2015 01:30 AM

A Low-throughput Wavelet-based Steganography Audio Scheme. (arXiv:1503.07551v1 [cs.MM])

This paper presents the preliminary of a novel scheme of steganography, and introduces the idea of combining two secret keys in the operation. The first secret key encrypts the text using a standard cryptographic scheme (e.g. IDEA, SAFER+, etc.) prior to the wavelet audio decomposition. The way in which the cipher text is embedded in the file requires another key, namely a stego-key, which is associated with features of the audio wavelet analysis.

by <a href="http://arxiv.org/find/cs/1/au:+Carrion_P/0/1/0/all/0/1">P. Carrion</a>, <a href="http://arxiv.org/find/cs/1/au:+Oliveira_H/0/1/0/all/0/1">H.M. de Oliveira</a>, <a href="http://arxiv.org/find/cs/1/au:+Souza_R/0/1/0/all/0/1">R.M. Campello de Souza</a> at March 27, 2015 01:30 AM

What's in your wallet?!. (arXiv:1306.2060v1 [math.HO] CROSS LISTED)

We use Markov chains and numerical linear algebra - and several CPU hours - to determine the most likely set of coins in your wallet under reasonable spending assumptions. We also compute a number of additional statistics. In particular, the expected number of coins carried by a person in the United States in our model is 10.

by <a href="http://arxiv.org/find/math/1/au:+Pudwell_L/0/1/0/all/0/1">Lara Pudwell</a>, <a href="http://arxiv.org/find/math/1/au:+Rowland_E/0/1/0/all/0/1">Eric Rowland</a> at March 27, 2015 01:30 AM

A polynomial time approximation scheme for computing the supremum of Gaussian processes. (arXiv:1202.4970v2 [cs.DS] UPDATED)

We give a polynomial time approximation scheme (PTAS) for computing the supremum of a Gaussian process. That is, given a finite set of vectors $V\subseteq\mathbb{R}^d$, we compute a $(1+\varepsilon)$-factor approximation to $\mathop {\mathbb{E}}_{X\leftarrow\mathcal{N}^d}[\sup_{v\in V}|\langle v,X\rangle|]$ deterministically in time $\operatorname {poly}(d)\cdot|V|^{O_{\varepsilon}(1)}$. Previously, only a constant factor deterministic polynomial time approximation algorithm was known due to the work of Ding, Lee and Peres [Ann. of Math. (2) 175 (2012) 1409-1471]. This answers an open question of Lee (2010) and Ding [Ann. Probab. 42 (2014) 464-496]. The study of supremum of Gaussian processes is of considerable importance in probability with applications in functional analysis, convex geometry, and in light of the recent breakthrough work of Ding, Lee and Peres [Ann. of Math. (2) 175 (2012) 1409-1471], to random walks on finite graphs. As such our result could be of use elsewhere. In particular, combining with the work of Ding [Ann. Probab. 42 (2014) 464-496], our result yields a PTAS for computing the cover time of bounded-degree graphs. Previously, such algorithms were known only for trees. Along the way, we also give an explicit oblivious estimator for semi-norms in Gaussian space with optimal query complexity. Our algorithm and its analysis are elementary in nature, using two classical comparison inequalities, Slepian's lemma and Kanter's lemma.

by <a href="http://arxiv.org/find/cs/1/au:+Meka_R/0/1/0/all/0/1">Raghu Meka</a> at March 27, 2015 01:30 AM

/r/compsci

StackOverflow

Play unit test to upload multipart form with file (Scala): FileNotFoundException?

Ok, I'm going crazy on this one. I've written a unit test to test out our web site (in Play and Scala). The test should submit a file via the file input on the form.

I'm using WebDriver to run the test.

It seems to be instrumented just fine... but, whenever I submit the form, the logs show:

[debug] application - Uploading ... Abundance_800.png
[debug] application - /Users/zbeckman/Projects/Server/project/server/public/images/bundled_sentiments/Abundance_800.png
[error] application - /var/folders/fr/f2_wx1316h3f80cdvtcwq6c00000gn/T/Users/zbeckman/Projects/Server/project/server/public/images/bundled_sentiments/Abundance_800.png (No such file or directory)
java.io.FileNotFoundException: /var/folders/fr/f2_wx1316h3f80cdvtcwq6c00000gn/T/Users/zbeckman/Projects/Server/project/server/public/images/bundled_sentiments/Abundance_800.png (No such file or directory)

Obviously, my attempt to directly access the file in the public folder isn't allowed, but I can't figure out how to get around this.

Here's the code:

"upload a number of sentiments to the server" in new WithBrowser  {
    browser.goTo("/index")

    val pathname: String = "./public/images/bundled_sentiments/"
    val images = new java.io.File(pathname).listFiles.filter(_.getName.endsWith(".png"))
    var imageName = ""

    Logger.debug(s"Found $images. in " + pathname + " for sentiment image sources...")

    for (file <- images) {
        imageName = file.getName.replace("_800", "").replace(".png", "").replace("_", " ")

        Logger.debug("Uploading ... " + file.toString.split("/").last)

        webDriver.findElement(By.id("image")).sendKeys(file.getCanonicalPath)
        browser.fill("#nametx").`with`(imageName)
        browser.submit("#submit")
        browser.await().atMost(10, TimeUnit.SECONDS).untilPage().isLoaded

        assert(browser.find("alert").contains("Image uploaded successfully"))
    }
}

So, the question is: is there any way to achieve this using WebDriver, or am I just not going to be able to pass a reference to these files? (How about copying the files to a location that WebDriver has access to?)

by Zac at March 27, 2015 12:59 AM

/r/netsec

Planet Theory

Online Dictionary Matching with One Gap

Authors: Amihood Amir, Tsvi Kopelowitz, Avivit Levy, Seth Pettie, Ely Porat, B. Riva Shalom
Download: PDF
Abstract: The Online Dictionary Matching with Gaps Problem (DMG), is the following. Preprocess a dictionary $D$ of $d$ gapped patterns $P_1,\ldots,P_d$ such that given a query online text $T$ presented one character at a time, each time a character arrives we must report the subset of patterns that have a match ending at this character. A gapped pattern $P_i$ is of the form $P_{i,1}\{\alpha_i,\beta_i\}P_{i,2}$, where $P_{i,1},P_{i,2}\in \Sigma^*$ and $\{\alpha_i,\beta_i\}$ matches any substring with length between $\alpha_i$ and $\beta_i$. Finding efficient solutions for DMG has proven to be a difficult algorithmic challenge. Little progress has been made on this problem and to date, no truly efficient solutions are known. In this paper, we give a new DMG algorithm and provide compelling evidence that no substantially faster algorithm exists, based on the Integer3SUM conjecture.

March 27, 2015 12:42 AM

Router-level community structure of the Internet Autonomous Systems

Authors: Mariano G. Beiró, Sebastián P. Grynberg, J. Ignacio Alvarez-Hamelin
Download: PDF
Abstract: The Internet is composed of routing devices connected between them and organized into independent administrative entities: the Autonomous Systems. The existence of different types of Autonomous Systems (like large connectivity providers, Internet Service Providers or universities) together with geographical and economical constraints, turns the Internet into a complex modular and hierarchical network. This organization is reflected in many properties of the Internet topology, like its high degree of clustering and its robustness.

In this work, we study the modular structure of the Internet router-level graph in order to assess to what extent the Autonomous Systems satisfy some of the known notions of community structure. We show that the modular structure of the Internet is much richer than what can be captured by the current community detection methods, which are severely affected by resolution limits and by the heterogeneity of the Autonomous Systems. Here we overcome this issue by using a multiresolution detection algorithm combined with a small sample of nodes. We also discuss recent work on community structure in the light of our results.

March 27, 2015 12:41 AM

Analyzing Adaptive Cache Replacement Strategies

Authors: Leo Shao, Mario E. Consuegra, Raju Rangaswami, Giri Narasimhan
Download: PDF
Abstract: Adaptive Replacement Cache (ARC) and Clock with Adaptive Replacement (CAR) are state-of-the-art "adaptive" cache replacement algorithms invented to improve on the shortcomings of classical cache replacement policies such as LRU and Clock. Both ARC and CAR have been shown to outperform their classical and popular counterparts in practice. However, for over a decade, no theoretical proof of the competitiveness of ARC and CAR is known. We prove that for a cache of size N, (a) ARC is 4N-competitive, and (b) CAR is 21N-competitive, thus proving that no "pathological" worst-case request sequence exists that could make them perform much worse than LRU.

March 27, 2015 12:41 AM

StackOverflow

How to implement a dynamic group by in Scala?

Suppose I have a List[Map[String, String]] that represents a table in a database, and a List[String] that represents a list of column names. I'd like to implement the equivalent of a GROUP BY clause in an SQL query:

def fun(table: List[Map[String, String]], keys: List[String]): List[List[Map[String, String]]]

For example:

val table = List(
    Map("name"->"jade", "job"->"driver", "sex"->"male"),
    Map("name"->"mike", "job"->"police", "sex"->"female"),
    Map("name"->"jane", "job"->"clerk", "sex"->"female"),
    Map("name"->"smith", "job"->"driver", "sex"->"male")
)

val keys = List("job", "sex")

And then fun(table,keys) should be:

List(
    List(
        Map("name"->"jade", "job"->"driver", "sex"->"male"),
        Map("name"->"smith", "job"->"driver", "sex"->"male")
    ),
    List(Map("name"->"mike", "job"->"police", "sex"->"female")),
    List(Map("name"->"jane", "job"->"clerk", "sex"->"female"))
)
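
For reference, a minimal sketch using the built-in groupBy, keyed on the projection of each row onto the requested columns (note that groupBy returns a Map, so the order of the groups is unspecified):

    def fun(table: List[Map[String, String]], keys: List[String]): List[List[Map[String, String]]] =
      table.groupBy(row => keys.map(row.get)).values.toList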

by Jade Tang at March 27, 2015 12:41 AM

Planet Theory

Sensitivity versus Certificate Complexity of Boolean Functions

Authors: Andris Ambainis, Krišjānis Prūsis, Jevgēnijs Vihrovs
Download: PDF
Abstract: Sensitivity, block sensitivity and certificate complexity are basic complexity measures of Boolean functions. The famous sensitivity conjecture claims that sensitivity is polynomially related to block sensitivity. However, it has been notoriously hard to obtain even exponential bounds. Since block sensitivity is known to be polynomially related to certificate complexity, an equivalent of proving this conjecture would be showing that certificate complexity is polynomially related to sensitivity. Previously, it has been shown that $bs(f) \leq C(f) \leq 2^{s(f)-1} s(f) - (s(f)-1)$. In this work, we give a better upper bound of $bs(f) \leq C(f) \leq \max\left(2^{s(f)-1}\left(s(f)-\frac 1 3\right), s(f)\right)$ using a recent theorem limiting the structure of function graphs. We also examine relations between these measures for functions with small 1-sensitivity $s_1(f)$ and arbitrary 0-sensitivity $s_0(f)$.

March 27, 2015 12:41 AM

Log-concavity and lower bounds for arithmetic circuits

Authors: Ignacio García-Marco, Pascal Koiran, Sébastien Tavenas
Download: PDF
Abstract: One question that we investigate in this paper is, how can we build log-concave polynomials using sparse polynomials as building blocks? More precisely, let $f = \sum_{i = 0}^d a_i X^i \in \mathbb{R}^+[X]$ be a polynomial satisfying the log-concavity condition $a_i^2 > \tau a_{i-1}a_{i+1}$ for every $i \in \{1,\ldots,d-1\},$ where $\tau > 0$. Whenever $f$ can be written under the form $f = \sum_{i = 1}^k \prod_{j = 1}^m f_{i,j}$ where the polynomials $f_{i,j}$ have at most $t$ monomials, it is clear that $d \leq k t^m$. Assuming that the $f_{i,j}$ have only non-negative coefficients, we improve this degree bound to $d = \mathcal{O}(k m^{2/3} t^{2m/3} \log^{2/3}(kt))$ if $\tau > 1$, and to $d \leq kmt$ if $\tau = d^{2d}$.

This investigation has a complexity-theoretic motivation: we show that a suitable strengthening of the above results would imply a separation of the algebraic complexity classes VP and VNP. As they currently stand, these results are strong enough to provide a new example of a family of polynomials in VNP which cannot be computed by monotone arithmetic circuits of polynomial size.

March 27, 2015 12:40 AM

StackOverflow

How to change settings of an index after creation in elastic4s?

I need to disable index refreshing for the duration of a batch indexing run (of gigabytes) and re-enable it once the run is done. But from the source code of elastic4s I can't find a way to do it other than at index creation time... Is it possible? Or is there a workaround for this?

In the Java client:

client
  .admin
  .indices()
  .prepareUpdateSettings()
  .setSettings(settings)
  .setIndices(indexName)
  .execute()
  .actionGet()

Natively:

curl -XPUT 'localhost:9200/my_index/_settings' -d '
{
    "index" : {
        "refresh_interval" : -1
    }
}
'

by lisak at March 27, 2015 12:38 AM

TheoryOverflow

How to measure a probability that two data objects represent the same entity? [on hold]

What algorithmic ways exist to assign a probability for two data objects to represent the same entity?

For instance, identifying misspelled person names or ambiguous initials, when we may also have persons' dates of birth and addresses (which might, however, be inaccurate or missing).

Specific requirements:

  • The number of objects is a couple of million (persons).
  • The object structure is known but individual properties can be missing or inaccurate (name, address, date of birth).

by Dmitri Zaitsev at March 27, 2015 12:20 AM

CompsciOverflow

Sequential vs Parallel Source Code [on hold]

I am a student of computer engineering.

For my course of study I need to find code written in both serial and parallel (MPI) versions, in order to write a report comparing their efficiency.

Browsing the Internet, I found how difficult it is to find code that is working and reliable.

I would be most grateful if you could share some parallel and sequential source code versions for my studies.

by lovemint at March 27, 2015 12:17 AM

StackOverflow

Scala & Finatra: send file server response directly from disk to network w/o loading into memory

I was tasked at work with sending our clients a file via Finatra, directly from disk without loading it into memory (these are very large files). Here are my questions:

0) How do I interact with the disk I/O without ever loading the information into memory?

1) When connecting a file inputstream to an HTTP outputstream, does that actually load the data into RAM?

2) I thought everything had to be loaded into memory to be worked with, transported, and sent. How can one actually send contents directly to a network port without loading them into memory?

3) Would the data flow from the disk, through the CPU registers, onto the network adapter's buffer to be sent? How do I ensure that this is the flow without filling RAM?

4) Is it possible to do this in Finatra?
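For context, a minimal sketch of the usual JVM answer (not Finatra-specific; `out` stands for whatever OutputStream the framework hands you): FileChannel.transferTo streams the file in bounded chunks, and when the target is a socket it can use the OS sendfile path, so the whole file never sits in heap memory.

import java.io.{File, FileInputStream, OutputStream}
import java.nio.channels.Channels

def streamFile(file: File, out: OutputStream): Unit = {
  val in = new FileInputStream(file).getChannel
  try {
    val dest = Channels.newChannel(out)
    var pos = 0L
    // transferTo may move fewer bytes than requested, so loop until done
    while (pos < in.size)
      pos += in.transferTo(pos, in.size - pos, dest)
  } finally in.close()
}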

by Mr.Student at March 27, 2015 12:09 AM

HN Daily

March 26, 2015

CompsciOverflow


Planet Emacsen

StackOverflow

How do I transform existing log statements into interpolated string format?

I want to change existing statements such as

log.debug("Value of foo : {}", foo)

to use interpolated string format instead

log.debug(s"Value of foo : $foo")

Although IntelliJ has a blog post on converting between string formats, it seems to require first converting the existing string to

"Value of foo ".format(foo)

which defeats the purpose of wanting an automatic format change option in the first place (and doesn't work properly for me anyway).

How can I easily format these log statements to use string interpolation? Is there any way to do so wholesale (across an entire project)?

by Alok at March 26, 2015 11:48 PM

How to put Selected value in @select helper

I have this code:

@select( insurancerForm("pi"),packages.map(s => s.getId.toString -> s.getName), args = '_label -> "")

and I want to pre-select an already defined value in the select.

Thanks in advance

by Nick at March 26, 2015 11:39 PM

scala play action composition response header

I am using action composition for API authorization by reading request headers. I want to inject an authentication token into the response header so the client can use it on consecutive API calls. So far I have intercepted the request using action composition; can I set the response header before I get to the controller code, or can this only be done in controller code?

Can the response header be injected in the invokeBlock below?

def invokeBlock[A](request: Request[A], block: AuthorizedRequest[A] => Future[Result]): Future[Result] = {
  val requestToken = sessionTokenPair(request)
  requestToken match {
    case Some(token) =>
      AuthenticationManager.validateAPIToken(token).map { sh =>
        block(new AuthorizedRequest(sh, request))
      }.getOrElse {
        Future.successful(Forbidden(Json.toJson(
          Error(status = Status.FORBIDDEN, errorCode = 43, message = "Bad Request", developerMessage = "Issue")
        )))
      }
    case _ =>
      Future.successful(Forbidden(Json.toJson(
        Error(status = Status.FORBIDDEN, errorCode = 43, message = "Bad Request", developerMessage = "Issue")
      )))
  }
}
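For what it's worth, a sketch of one way this could work (assuming Play's standard Result API; "X-Auth-Token" is a made-up header name and token.toString a placeholder for however the token serializes): since the block returns a Future[Result], you can map over it inside invokeBlock and decorate the result before it leaves the action.

// inside the Some(token) branch
block(new AuthorizedRequest(sh, request))
  .map(result => result.withHeaders("X-Auth-Token" -> token.toString))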

by Dipendra Adhikari at March 26, 2015 11:38 PM

When should I use function vs method?

Let's say I have some inputs and I'm generating some output. I don't need to maintain state.

Should I use a function or should I create a class that will have one method that would look exactly like that function?

What are the advantages of one over the other? (besides unit testing which is easier with the object)

by Rodrigo Ruiz at March 26, 2015 11:26 PM

DragonFly BSD Digest

Clustering and copies in HAMMER2

Matthew Dillon answered some mailing list questions on how clustering and data copies will work in HAMMER2 – no due date, of course, because this is very complex.  If you’re really into it, there’s always watching the recent commits.

by Justin Sherrill at March 26, 2015 11:26 PM

BSDNow 082: SSL in the wild

BSDNow 082 is up, talking with Bernard Spil about LibreSSL adoption in FreeBSD ports.  There’s lots of other material listed – see the BSDTalk page for a summary of all the topics covered.

by Justin Sherrill at March 26, 2015 11:22 PM

QuantOverflow

Effect of vol smile on risk neutral probability of ITM

I was asked in an interview how the vol smile affects the price of a binary option, which is essentially Prob(ITM) under the risk-neutral measure. My thought is that if the implied vol is high in the region where the option is OTM, then Prob(ITM) in that region is higher, and so is the price of the binary option. Please correct me or give a more rigorous extension of my thoughts.
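One way to make this rigorous (a sketch in Black-Scholes notation, zero rates): a digital call is the limit of a tight call spread, so its price is the negative strike derivative of the vanilla price. Writing the vanilla price as $C(K, \sigma_{imp}(K))$ and differentiating, $$ D(K) = -\frac{dC}{dK} = N(d_2) - \frac{\partial C}{\partial \sigma}\,\frac{\partial \sigma_{imp}}{\partial K}. $$ Since vega $\partial C/\partial \sigma$ is positive, a downward-sloping smile ($\partial \sigma_{imp}/\partial K < 0$) pushes the digital price above the flat-vol value $N(d_2)$, while an upward slope pushes it below.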

by user3354000 at March 26, 2015 11:07 PM

Time-independent local volatility

Suppose somebody provides us with a surface of European call prices $C(\tau,K)$ where $\tau$ stands for time-to-maturity and $K$ for the strike. By Dupire's results, there is a unique local volatility function $\sigma(\tau,K)$ that generates these prices, and it can be expressed from them as $$ \sigma^2(\tau,K) = \frac{2C_\tau}{K^2C_{KK}}, $$ where for simplicity I am assuming that the interest rate is zero. Now, if we just have $C(T,K)$ for a single maturity $\tau = T$, is it true that there exists a unique time-independent local volatility $\sigma(K)$ that generates this price at that maturity? In case it does, is there an analytic formula for that function?

by Ulysses at March 26, 2015 11:03 PM

StackOverflow

How to hotplug PCI/PCIe devices in FreeBSD? (Or how to remove and rescan/re-enumerate a PCI device?)

I'm looking for a way to refresh/re-enumerate the pci device list.

In Linux, you can remove a particular PCI device, and after performing a "rescan" the device will appear again. In Linux this is done by:

echo 1 > /sys/bus/pci/devices/.../remove 
echo 1 > /sys/bus/pci/rescan 

I'm looking for similar functionality in FreeBSD.

What do I want to achieve?

I'm using FreeBSD, and my PCIe device can be reset from the host. But when it boots again it's uncommunicative, so I want to rescan the PCI devices in order to initiate a new connection between the host and the device.

Any idea would be appreciated, even if it takes some coding effort.

Thanks!

by Era paz at March 26, 2015 11:01 PM

DataTau

Planet Clojure

Taming those pesky datetimes in a clojure stack

Have you ever faced frustrating issues when using dates in your clojure stack? If I mention java.util.Date, java.sql.Date/java.sql.Timestamp, clj-time, json/ISO-8601 and UTC/timezones, does your blood pressure rise slightly?

This is the blog post I wished I had several weeks back to save me from some of the date pains my current project has been through.

Introduction

A little while back date handling started to become a nightmare in my current project. We have a stack with a ClojureScript frontend, a clojure WebApp and a couple of clojure microservices apps using Oracle as a data store.

We decided pretty early on to use clj-time. It's quite a nice wrapper on top of joda-time. But we didn't pay much attention to how dates should be read/written to Oracle or how we should transmit dates across process boundaries. Timezones are another issue we didn't worry too much about either.

You will probably not regret using a UTC timezone for your servers and database. This post puts it succinctly. Your webclient(s), though, are out of your control!

I’m sure some of the measures we have taken can be solved more elegantly, but hopefully you might find some of them useful.

Reading from/writing to the database

We use clojure/java.jdbc for our database integration. Here’s how we managed to simplify reading and writing dates/datetimes.

(ns acme.helpers.db
  (:import [java.sql PreparedStatement])
  (:require [acme.util.date :as du]
            [clj-time.coerce :as c]
            [clojure.java.jdbc :as jdbc]))


(extend-protocol jdbc/IResultSetReadColumn                                (1)
  java.sql.Date
  (result-set-read-column [v _ _] (c/from-sql-date v))                    (2)

  java.sql.Timestamp
  (result-set-read-column [v _ _] (c/from-sql-time v)))

(extend-type org.joda.time.DateTime                                       (3)
  jdbc/ISQLParameter
  (set-parameter [v ^PreparedStatement stmt idx]
    (.setTimestamp stmt idx (c/to-sql-time v))))                          (4)
1 We extend the protocol for reading objects from the java.sql.ResultSet. In our case we chose to treat java.sql.Date and java.sql.Timestamp in the same manner
2 clj-time provides some nifty coercion functions including the facility to coerce from sql dates/times to DateTime
3 We extend the DateTime class (which is final btw!) with the ISQLParameter protocol. This is a protocol for setting SQL parameters in statement objects.
4 We explicitly call setTimestamp on the prepared statement with a DateTime coerced to a java.sql.Timestamp as our value

Now we can interact with Oracle without being bothered with java.sql.Date and java.sql.Timestamp malarkey.

It's vital that you require the namespace where you have the above incantations before doing any db interactions. Might be evident, but it's worth emphasizing.
Clojure protocols are pretty powerful stuff. It's deffo on my list of clojure things I need to dig deeper into.

Dates across process boundaries

Our services unsurprisingly use JSON as the data exchange format. I suppose the de facto standard date format is ISO-8601, so it makes sense to use that. It so happens this is the standard format you get when you stringify a DateTime.

You might want to look into transit. It would probably have been very useful for us :)

Outbound dates

(ns acme.core
  (:require [clojure.data.json :as json]
            [clj-time.coerce :as c]))


(extend-type org.joda.time.DateTime           (1)
  json/JSONWriter
  (-write [date out]
    (json/-write (c/to-string date) out)))    (2)
1 Another extend of DateTime, this time with the JSONWriter protocol.
2 When serializing a DateTime to json we coerce it to a string. clj-time.coerce luckily uses the ISO-8601 format by default

Inbound dates

(ns acme.util.date
  (:require [clj-time.core :as t]
            [clj-time.format :as f]
            [clj-time.coerce :as c]))


(def iso-date-pattern (re-pattern "^\\d{4}-\\d{2}-\\d{2}.*"))


(defn date? [date-str]                                                         (1)
  (when (and date-str (string? date-str))
    (re-matches iso-date-pattern date-str)))


(defn json->datetime [json-str]
  (when (date? json-str)
    (if-let [res (c/from-string json-str)]                                     (2)
      res
      nil))) ;; you should probably throw an exception or something here !

(defn datetimeify [m]
  (let [f (fn [[k v]]
            (if (date? v)
              [k (json->datetime v)]                                           (3)
              [k v]))]
    (clojure.walk/postwalk (fn [x] (if (map? x) (into {} (map f x)) x)) m)))
1 A crude helper function to check if a given value is a date. A lot passes as valid ISO-8601; we settled for at least a minimum of YYYY-MM-DD
2 Coerces a string to a DateTime; the coercion will return nil if the string can't be coerced, which is probably worth an exception
3 Traverses an arbitrarily nested map and coerces values that (most likely) are dates
Hook up middleware

(defn wrap-date [handler]                                     (1)
  (fn [req]
    (handler (update-in req [:params] datetimeify))))


(def app (-> routes
            auth/wrap-auth
            wrap-date                                         (2)
            wrap-keyword-params
            wrap-json-params
            wrap-datasource
            wrap-params
            wrap-config))
1 Middleware that calls our helper function to coerce dates with the request map as input
2 Hook up the middleware

Handling dates in the webclient

We have a ClojureScript based client, so it made sense for us to use cljs-time. It's very much inspired by clj-time, but there are some differences. The most obvious one is that there is no jodatime, so Google Closure's goog.date is used behind the scenes.

So how do we convert to and from the ISO-8601 string based format in our client?

Surprisingly similar to how we do it on the server side as it happens !

;; requires similar to the ones on the server side: cljs-time.* rather than clj-time.*


(defn datetimes->json [m]                                                       (1)
  (let [f (fn [[k v]]
            (if (instance? goog.date.Date v)                                    (2)
              [k (c/to-string v)]
              [k v]))]
    (clojure.walk/postwalk (fn [x] (if (map? x) (into {} (map f x)) x)) m)))


;; AJAX/HTTP Utils

(defn resp->view [resp]                                                         (3)
  (-> resp
      (update-in [:headers] #(keywordize-keys %))
      (assoc-in [:body] (-> resp datetimeify :body))))                          (4)

(defn view->req [params]                                                        (5)
  (-> params
      datetimes->json))                                                         (6)
1 Function that traverses a nested map and converts from DateTime to ISO-8601
2 Almost an instanceOf check to decide if the value is eligible for coercion
3 Handy function to transform an ajax response to something appropriate for use in our client side logic
4 datetimeify is identical to our server side impl
5 Handy function to take a map, typically request params, and transform to something appropriate for communication with a backend server. If you are using something like cljs-http it might be appropriate to hook it in as a middleware.
6 Coerce any DateTime values to ISO-8601 date strings
What about timezones on the client? The default for the datetime constructor in cljs-time is to use UTC. So when displaying time and/or accepting date-with-time input from the client, you need to convert to/from the appropriate timezone.
(ns acme.client
  (:require [cljs-time.format :as f]
            [cljs-time.core :as t]))


(def sample (t/now)) ;; lets say 2015-03-27T00:53:38.950Z


(->> sample
     t/to-default-time-zone                          ; UTC+1 for me
     (f/unparse (f/formatter "dd.MM.yyyy hh:mm")))   ; => 27.03.2015 01:53

Summary

Using clojure protocols we managed to simplify reading and writing date(time)s to the database. Protocols also helped us serialize date(time)s to json. For reading json we had to hack it a little bit. By using fairly similar libs for dates on both the client and our server apps, we managed to reuse quite a bit. In addition, we have reasonable control of where we need to compensate for timezones. Most importantly though, our server-side and client-side logic can work consistently with a sensible and powerful date implementation.

by Magnus Rundberget at March 26, 2015 11:00 PM

StackOverflow

Prevent Scala from parsing XML

I would like to define a function with this symbolic name without using backticks:

def <? (i: Int): Unit = println(i)

Unfortunately, this results in the error "identifier expected but $XMLSTART$< found". Is there a way to prevent Scala from parsing this symbolic name as XML?

Thanks !

by rhartert at March 26, 2015 10:42 PM

How do I create a Spark RDD from Accumulo 1.6 in spark-notebook?

I have a Vagrant image with Spark Notebook, Spark, Accumulo 1.6, and Hadoop all running. From notebook, I can manually create a Scanner and pull test data from a table I created using one of the Accumulo examples:

val instanceNameS = "accumulo"
val zooServersS = "localhost:2181"
val instance: Instance = new ZooKeeperInstance(instanceNameS, zooServersS)
val connector: Connector = instance.getConnector( "root", new PasswordToken("password"))
val auths = new Authorizations("exampleVis")
val scanner = connector.createScanner("batchtest1", auths)

scanner.setRange(new Range("row_0000000000", "row_0000000010"))

for(entry: Entry[Key, Value] <- scanner) {
  println(entry.getKey + " is " + entry.getValue)
}

will give the first ten rows of table data.

When I try to create the RDD thusly:

val rdd2 = 
  sparkContext.newAPIHadoopRDD (
    new Configuration(), 
    classOf[org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat], 
    classOf[org.apache.accumulo.core.data.Key], 
    classOf[org.apache.accumulo.core.data.Value]
  )

I get an RDD returned to me that I can't do much with due to the following error:

java.io.IOException: Input info has not been set. at org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.validateOptions(InputConfigurator.java:630) at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.validateOptions(AbstractInputFormat.java:343) at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.getSplits(AbstractInputFormat.java:538) at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:98) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:222) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:220) at scala.Option.getOrElse(Option.scala:120) at org.apache.spark.rdd.RDD.partitions(RDD.scala:220) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1367) at org.apache.spark.rdd.RDD.count(RDD.scala:927)

This totally makes sense in light of the fact that I haven't specified any parameters as to which table to connect with, what the auths are, etc.

So my question is: What do I need to do from here to get those first ten rows of table data into my RDD?

update one Still doesn't work, but I did discover a few things. Turns out there are two nearly identical packages,

org.apache.accumulo.core.client.mapreduce

&

org.apache.accumulo.core.client.mapred

Both have nearly identical members, except that some of the method signatures are different. Not sure why both exist, as there's no deprecation notice that I could see. I attempted to implement Sietse's answer with no joy. Below is what I did, and the responses:

import org.apache.hadoop.mapred.JobConf
import org.apache.hadoop.conf.Configuration
val jobConf = new JobConf(new Configuration)

import org.apache.hadoop.mapred.JobConf import org.apache.hadoop.conf.Configuration jobConf: org.apache.hadoop.mapred.JobConf = Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml

Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml

AbstractInputFormat.setConnectorInfo(jobConf, 
                                     "root", 
                                     new PasswordToken("password")

AbstractInputFormat.setScanAuthorizations(jobConf, auths)

AbstractInputFormat.setZooKeeperInstance(jobConf, new ClientConfiguration)

val rdd2 = 
  sparkContext.hadoopRDD (
    jobConf, 
    classOf[org.apache.accumulo.core.client.mapred.AccumuloInputFormat], 
    classOf[org.apache.accumulo.core.data.Key], 
    classOf[org.apache.accumulo.core.data.Value], 
    1
  )

rdd2: org.apache.spark.rdd.RDD[(org.apache.accumulo.core.data.Key, org.apache.accumulo.core.data.Value)] = HadoopRDD[1] at hadoopRDD at :62

rdd2.first

java.io.IOException: Input info has not been set. at org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.validateOptions(InputConfigurator.java:630) at org.apache.accumulo.core.client.mapred.AbstractInputFormat.validateOptions(AbstractInputFormat.java:308) at org.apache.accumulo.core.client.mapred.AbstractInputFormat.getSplits(AbstractInputFormat.java:505) at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:201) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:222) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:220) at scala.Option.getOrElse(Option.scala:120) at org.apache.spark.rdd.RDD.partitions(RDD.scala:220) at org.apache.spark.rdd.RDD.take(RDD.scala:1077) at org.apache.spark.rdd.RDD.first(RDD.scala:1110) at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:64) at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:69) at...

* edit 2 *

re: Holden's answer - still no joy:

    AbstractInputFormat.setConnectorInfo(jobConf, 
                                         "root", 
                                         new PasswordToken("password")
    AbstractInputFormat.setScanAuthorizations(jobConf, auths)
    AbstractInputFormat.setZooKeeperInstance(jobConf, new ClientConfiguration)
    InputFormatBase.setInputTableName(jobConf, "batchtest1")
    val rddX = sparkContext.newAPIHadoopRDD(
      jobConf, 
      classOf[org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat], 
      classOf[org.apache.accumulo.core.data.Key], 
      classOf[org.apache.accumulo.core.data.Value]
      )

rddX: org.apache.spark.rdd.RDD[(org.apache.accumulo.core.data.Key, org.apache.accumulo.core.data.Value)] = NewHadoopRDD[0] at newAPIHadoopRDD at :58

Out[15]: NewHadoopRDD[0] at newAPIHadoopRDD at :58

rddX.first

java.io.IOException: Input info has not been set. at org.apache.accumulo.core.client.mapreduce.lib.impl.InputConfigurator.validateOptions(InputConfigurator.java:630) at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.validateOptions(AbstractInputFormat.java:343) at org.apache.accumulo.core.client.mapreduce.AbstractInputFormat.getSplits(AbstractInputFormat.java:538) at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:98) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:222) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:220) at scala.Option.getOrElse(Option.scala:120) at org.apache.spark.rdd.RDD.partitions(RDD.scala:220) at org.apache.spark.rdd.RDD.take(RDD.scala:1077) at org.apache.spark.rdd.RDD.first(RDD.scala:1110) at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:61) at

edit 3 -- progress!

I was able to figure out why the 'input INFO not set' error was occurring. The eagle-eyed among you will no doubt see that the following code is missing a closing ')'

AbstractInputFormat.setConnectorInfo(jobConf, "root", new PasswordToken("password") 

As I'm doing this in spark-notebook, I'd been clicking the execute button and moving on because I wasn't seeing an error. What I forgot was that the notebook will do what spark-shell does when you leave off a closing ')': it will wait forever for you to add it. So the error was the result of the 'setConnectorInfo' method never getting executed.

Unfortunately, I'm still unable to shove the Accumulo table data into an RDD that's usable to me. When I execute

rddX.count

I get back

res15: Long = 10000

which is the correct response: there are 10,000 rows of data in the table I pointed to. However, when I try to grab the first element of data thusly:

rddX.first

I get the following error:

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0.0 in stage 0.0 (TID 0) had a not serializable result: org.apache.accumulo.core.data.Key

any thoughts on where to go from here?

edit 4 -- success!

The accepted answer plus comments got me 90% of the way there, except for the fact that the Accumulo key/value need to be cast into something serializable. I got this working by invoking the .toString() method on both. I'll try to post complete working code soon in case anyone else runs into the same issue.

by David Daedalus at March 26, 2015 10:39 PM

Format partial functions like match functions in Scala with IntelliJ

IntelliJ (14.0.3, Scala plugin 1.4) formats regular case/match blocks after a function as so (this is from some HTTP code):

get("/work") { x => x match {
  case (200, result) => ...
  case _ => ...
}
             } // I'm not worried about this brace

If I collapse that obvious x => x match, it formats it like this:

get("/work") {
               case (200, result) => ...
               case _ => ...
             }

That kind of formatting gets messy if the call to get("/work") uses a longer url (as I have in places). It gets even worse if I have further nested things.

Is there any way to make IntelliJ format the case statements to be indented by one tabstop relative to the original statement, instead of relative to the opening brace?

by ThorinII at March 26, 2015 10:37 PM

Replacing workers in BalancingPool

I'm using akka's BalancingPool to distribute tasks over workers. It works pretty well until I add/remove workers in the pool, which I want to do because some of the workers are unreliable and perform badly. However, the balancing pool sends all messages to only one worker after the replacement.

Here is a scala test for this

import scala.concurrent.duration._
import org.scalatest._
import akka.util.Timeout
import akka.actor._
import akka.routing._ 
import akka.testkit._

class BalancingPoolSpec extends TestKit(ActorSystem("BalancingPoolSpec")) with ImplicitSender
  with WordSpecLike with Matchers with BeforeAndAfterAll {

  override def afterAll {
    TestKit.shutdownActorSystem(system)
  }

  val numberOfTestMessages = 5
  val numberOfWorkers = 3
  val pool = system.actorOf(BalancingPool(numberOfWorkers).props(Props[Worker]), "pool")

  def sendMessagesAndCollectStatistic = {
    for (i <- 1 to numberOfTestMessages) pool ! "task"
    (currentRoutes, collectResponces)
  }

  def collectResponces = receiveN(numberOfTestMessages, 10.second).groupBy(l => l).map(t => (t._1, t._2.length))

  def currentRoutes = {
    pool ! GetRoutees
    val Routees(routees) = expectMsgAnyClassOf(classOf[Routees])
    routees
  }

  def replaceWorkers(oldRoutees: Seq[Routee]) = {
    //Adding new Routees before removing old ones to make it work :)
    for (i <- 1 to numberOfWorkers) pool ! AddRoutee(ActorRefRoutee(system.actorOf(Props[Worker])))
    for (r <- oldRoutees) pool ! RemoveRoutee(r)
    Thread.sleep(500) //Give some time to BalancingPool
  }

  "test" in {
    val (routees1, responces1) = sendMessagesAndCollectStatistic
    replaceWorkers(routees1)
    val (routees2, responces2) = sendMessagesAndCollectStatistic

    assert(responces2.size > 1 , s"""
      Before replacement distribution over ${routees1.size} workers: ${responces1}
      After replacement distribution over ${routees2.size} workers: ${responces2}""")
  } 
}


//For each task worker simulate some work for 1 second and sends back to sender worker's id
object Worker {
  var i = 0
  def newId = synchronized {
    i += 1
    i  
  } 
}

class Worker extends Actor {
  val id = Worker.newId
  def receive = {
    case _ => Thread.sleep(1000); sender ! id
  }
}

Failing message

1 was not greater than 1
     Before replacement distribution over 3 workers: Map(2 -> 2, 1 -> 1, 3 -> 2)
     After replacement distribution over 3 workers: Map(4 -> 5)

So, before the replacement, tasks were distributed over 3 workers; afterwards, all 5 tasks went to one worker. Is BalancingPool supposed to handle AddRoutee/RemoveRoutee messages in the expected way?

by Stas Kurilin at March 26, 2015 10:36 PM

Lobsters

/r/scala

Visual Unseen - system for Functional & OO & Logical programming

I think you guys might be interested in this.

I am developing a graphical system for functional, OO, and logical programming that is easy to use and understand. It might be fast too.

The system was inspired by Scala, but I created a new language to cope with the dynamic part and the generics.

"Unseen" is in honor of the humorous Discworld novels. And to un-C the old way of programming.

These are the basics:

Functions are like blocks
Typing is optional but can be added/ enforced.

Functions can be used like objects
And objects are components that contain functions.

With proto-types you can define types of the functions.
And they enable testing per function.

With the same system we can do logical programming too.

And this is just the surface.

While not completely Scala, I think you can get some new ideas for future developments of Scala.

For more info see subreddit:
/r/unseen_programming
It has some funny stories too!!!!

At least the system seems to be simple enough to be used by children, just like the old Smalltalk system was.

submitted by zyxzevn
[link] [6 comments]

March 26, 2015 10:22 PM

StackOverflow

scala.MatchError: java.lang.StackOverflowError (of class java.lang.StackOverflowError)

I had a project developed with Play Scala 2.0 that was working fine, and I needed to upgrade to 2.3.8. So I migrated my application by following this link https://www.playframework.com/documentation/2.3.x/Migration23 and I am able to run the code on the newer version on my machine, which has 8 GB RAM and JDK 1.7.0_25. But when I run the code on other machines with 4 GB RAM, it throws the scala.MatchError shown in the title.

It even breaks on some systems with 8 GB and JDK 1.8, so I am confused whether the issue is due to the JDK, the memory, or an issue in Play 2.3.8. Can somebody help me get this issue resolved?

Attached link to my complete stacktrace

Thanks in advance

by Karthik at March 26, 2015 10:02 PM

Fefe

I do wonder how the public prosecutor's office actually ...

I do wonder how the public prosecutor's office claims to know that this was a suicide.

Maybe he had an epileptic seizure, or an aneurysm. Heart attack? Stroke? Maybe he only locked the door because some idiotic terror-panic rules said so?

With a crater of pulverized debris like that, can you even tell whether the door was locked normally or in anti-terror mode?

Update: The answer arrived promptly by mail:

This was a "crash" with a very constant rate of descent, so the autopilot was almost inevitably involved. The autopilot usually flies in LNAV mode (lateral navigation), and it takes the course to fly from the already-stored route plan. The altitude is assigned by air traffic control and is entered separately with a simple rotary knob. To set the altitude to 0, you have to turn a knob from 38000 feet down to 0, as far as I know in steps of 1000 (and 1000 feet is about 330 meters).

Someone having a stroke would thus have to turn this knob, and turn, and turn... such a drastic error in the autopilot's altitude setting can hardly be explained by anything other than intent.

March 26, 2015 10:01 PM

Has Uhl actually demanded data retention yet ...

Has Uhl actually demanded data retention yet?

March 26, 2015 10:01 PM

Did the pilot play 'Killerspiele' (violent video games), by any chance?

Did the pilot play 'Killerspiele' (violent video games), by any chance?

March 26, 2015 10:01 PM

News from Argentina: Gaby Weber reports from the ...

News from Argentina: Gaby Weber reports from the circles around the recent case of the prosecutor who was shot. I find it great what Gaby keeps digging up simply by going there and asking a few questions.
"I am not a spy," says Bogado about his role in the affair, "just an informant." That is, someone who "keeps his ears open on the street to fight terrorism". His assignment was to win the trust of the head of the Iranian community.
:-)

Update: If you read nothing else today, read this article. Hilarious!

March 26, 2015 10:01 PM

StackOverflow

Deploying application via Ansible playbook without bringing both sides down

I'm using Ansible to deploy a Java web application. Deployment is quite basic: Ansible runs a Jenkins playbook, copying a jar file to 2 separate application servers, called node-01a and node-01b, both behind an Amazon AWS load balancer.

Currently the deployment happens on both node-01a and node-01b at the same time. What would be the easiest way to do this without both nodes going down at the same time?
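One common approach (a sketch; app_servers is a hypothetical inventory group containing node-01a and node-01b): Ansible's serial play keyword limits how many hosts a play runs on at once, so one node stays up behind the load balancer while the other is being updated.

- hosts: app_servers
  serial: 1              # run the play on one node at a time
  tasks:
    - name: Copy application jar
      copy: src=app.jar dest=/opt/app/app.jar

For a fully clean cutover you would typically also deregister each host from the AWS load balancer in a pre_tasks step and re-register it afterwards.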

by ujjain at March 26, 2015 09:47 PM

Lobsters

StackOverflow

Is there a way to test System/exit in clojure?

I have code that runs (System/exit 0) and I want to test that portion of the code. I tried testing it with with-redefs but I found out I'm not allowed to do that for Java methods. How can I go about testing this?
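One workaround (a sketch; exit! is a hypothetical wrapper name): you cannot redef the static method System/exit, but you can route all calls through a Clojure var and redef that in tests.

(ns my-app.core-test
  (:require [clojure.test :refer [deftest is]]))

(defn exit! [code]
  (System/exit code))

;; production code calls (exit! 0) instead of (System/exit 0)

(deftest exit-path-test
  (with-redefs [exit! (fn [code] code)]  ; return the code instead of exiting
    (is (= 0 (exit! 0)))))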

by Tony Baik at March 26, 2015 09:37 PM

AWS

AWS Management Console Update – Tag Substring Search

Many AWS customers use tags (key/value pairs) to organize their AWS resources. A recent Reddit thread (Share with us your AWS Tagging standards) provides a nice glimpse into some popular tagging strategies and practices.

Late last year we launched Resource Groups and Tag Editor. We gave you the ability to use Resource Groups to create, maintain, and view collections of AWS resources that share common tags. We also gave you the Tag Editor to simplify and streamline the process of finding and tagging AWS resources.

Today we are enhancing the tag search model that you use to create Resource Groups and to edit tags with the addition of substring search. If you encode multiple pieces of information within a single value, this feature can be very helpful. For example, you can locate resources that are tagged according to a pattern of the form “SystemDB-Dev-01-jeff” by searching for “Dev” like this:

The Tag Editor now supports “deep links” that let you find a particular set of resources by clicking a specially constructed link. Here are a couple of examples:

You can perform a similar substring search when you are creating a Resource Group:

Again, you can use deep links to find resources. Here’s an example:

This feature is available now and you can start using it today.

Jeff;

by Jeff Barr at March 26, 2015 09:29 PM

TheoryOverflow

Rate of convergence for the Perron–Frobenius theorem

The Perron–Frobenius Theorem states the following.

Let $A = (a_{ij})$ be an $n \times n$ irreducible, non-negative matrix ($a_{ij} \geq 0, \forall i,j: 1\leq i,j \leq n$). Then the following statements are true.

  • $A$ has a real eigenvalue $c \geq 0$ such that $c > |c'|$ for all other eigenvalues $c'$.
  • There is an eigenvector $v$ with non-negative real components corresponding to the largest eigenvalue $c: Av = cv, v_i \ge 0, 1 \leq i \leq n$, and $v$ is unique up to multiplication by a constant.
  • If the largest eigenvalue $c$ is equal to $1$, then for any starting vector $x^{\langle 0\rangle} \neq 0$ with non-negative components, the sequence of vectors $A^k x^{\langle 0\rangle}$ converge to a vector in the direction of $v$ as $k \rightarrow \infty$.

But the theorem does not say how fast the sequence of vectors $A^k x^{\langle 0\rangle}$ will converge. Are there any known results on the rate of convergence? What are some good, polynomial-time algorithms to compute this limiting vector?
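For reference, a sketch of the standard power-iteration bound (assuming $A$ is diagonalizable, with eigenvalues ordered $c = \lambda_1 > |\lambda_2| \geq \cdots \geq |\lambda_n|$ and eigenvectors $v = v_1, v_2, \ldots, v_n$): expanding $x^{\langle 0\rangle} = \sum_i \alpha_i v_i$ with $\alpha_1 \neq 0$ gives $$ A^k x^{\langle 0\rangle} = c^k \left( \alpha_1 v + \sum_{i \geq 2} \alpha_i \left( \frac{\lambda_i}{c} \right)^k v_i \right), $$ so the normalized iterates converge to the direction of $v$ geometrically, with ratio $|\lambda_2|/c$ per step; each step costs one matrix-vector product, i.e. $O(n^2)$ time for a dense matrix.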

by Arindam Pal at March 26, 2015 09:22 PM

/r/netsec

The bizarre origin story of ransomware: A Pynchon-esque tangle of AIDS, floppy disks, and Panamanian PO Boxes

March 26, 2015 09:15 PM

StackOverflow

Unix C - Portable WEXITSTATUS

I'm trying to get the exit code of a subprocess. On Linux and FreeBSD I can go like so:

[0] [ishpeck@kiyoshi /tmp]$ uname
FreeBSD
[0] [ishpeck@kiyoshi /tmp]$ cat tinker.c 
#include <stdio.h>
#include <sys/wait.h>

int main(void)
{
    FILE *proc = popen("ls", "r");
    printf("Exit code: %d\n", WEXITSTATUS(pclose(proc)));
    return 0;
}
[0] [ishpeck@kiyoshi /tmp]$ gcc tinker.c -o tinker
[0] [ishpeck@kiyoshi /tmp]$ ./tinker
Exit code: 0
[0] [ishpeck@kiyoshi /tmp]$ grep WEXITSTATUS /usr/include/sys/wait.h 
#define WEXITSTATUS(x)  (_W_INT(x) >> 8)

However, on OpenBSD, I get complaints from the compiler...

[0] [ishpeck@ishberk-00 /tmp]$ uname   
OpenBSD
[0] [ishpeck@ishberk-00 /tmp]$ cat tinker.c                                    
#include <stdio.h>
#include <sys/wait.h>

int main(void)
{
    FILE *proc = popen("ls", "r");
    printf("Exit code: %d\n", WEXITSTATUS(pclose(proc)));
    return 0;
}
[0] [ishpeck@ishberk-00 /tmp]$ gcc tinker.c -o tinker                          
tinker.c: In function 'main':
tinker.c:7: error: lvalue required as unary '&' operand
[1] [ishpeck@ishberk-00 /tmp]$ grep WEXITSTATUS /usr/include/sys/wait.h        
#define WEXITSTATUS(x)  (int)(((unsigned)_W_INT(x) >> 8) & 0xff)

I don't really care how it's done, I just need the exit code.

This leads me to believe that I would also have this problem on Mac: http://web.archiveorange.com/archive/v/8XiUWJBLMIKYSCRJnZK5#F4GgyRGRAgSCEG1

Is there a more portable way to use the WEXITSTATUS macro? Or is there a more portable alternative?
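For what it's worth, a sketch of the portable workaround: OpenBSD's _W_INT takes the address of its argument, so the macro needs an lvalue; storing the status in a variable first should compile on all of these systems.

int status = pclose(proc);
printf("Exit code: %d\n", WEXITSTATUS(status));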

by Ishpeck at March 26, 2015 09:09 PM

Fefe

Old and busted: intelligence agencies spying on Left Party MPs. New ...

Old and busted: intelligence agencies spying on Left Party MPs.

New hotness: Scotland Yard spied on Labour MPs. The Left Party is at least still a little bit communist, but Labour?! And we're not talking about some possible old communists here, but about people like Jack Straw, who later became Home Secretary (!). He is no anti-capitalist, quite the opposite!

March 26, 2015 09:01 PM

Good news: After the NSU came to light, several ...

Good news:
After the NSU came to light, several investigation proceedings were opened against suspected right-wing terrorists. The government has now announced that no suspicion was confirmed.
Well, lucky us once again! There are no right-wing terrorists in Germany after all.

March 26, 2015 09:01 PM

How did the media actually know that ...

How did the media actually know that the Germanwings crash was not a terrorist attack?

Would it have been the same if the pilot had had a Syrian name?

March 26, 2015 09:01 PM

After Netanyahu gave Obama the finger in the first act ...

After Netanyahu gave Obama the finger in the first act, with his "give a big speech before the Republicans in Congress uninvited" number, Obama retaliates and has a document declassified that exposes the Israeli nuclear program. The paper is from 1987 and was paid for by the Pentagon. Here is the PDF of the report.

March 26, 2015 09:01 PM

CompsciOverflow

When can we assuredly say that a function is little o of some other function? [duplicate]

I'm trying to determine a function $f(x)$ that is $O(f)$ but not $o(f)$ and also not $\Omega(f)$. Note the $f$ used in the asymptotic notation is not the same as $f(x)$.

Originally I thought of $f(x)=\log(x)$, which is $O(x)$, but I am not convinced that $o(x)$ is invalid for this function.

Previously I thought it was, because I could always come up with some constant $c$ that would bring the function $x$ below $f(x)$. However, I could say the same for $o(2^x)$, because surely there is some infinitesimally small constant that will put $2^x$ below $f(x)$ at a given value $x$. Any advice on this matter?

by drshmoo at March 26, 2015 09:00 PM

Possessive Kleene star operator

Has anyone studied the consequences of making the Kleene star in regular expressions always "possessive"?

In other words, if * always matched as much as possible and no backtracking were allowed, would I still be able to express any regular language?

Let's say that @ is the possessive * operator.

There are cases where it doesn't matter: a*b and a@b both will match any string with 0 or more $a$ followed by a $b$.

However there are cases where being possessive is relevant: a.*b will match any string that starts with $a$ and ends with $b$ but a.@b will never match any string as the greedy @ will eat any character, including the ending $b$. The equivalent expression would be a([^b]@b)+.

One may be tempted to think that for any non-possessive expression there is an equivalent possessive expression and vice versa, but I wasn't able to find any proof of this.

I'm limiting myself to * considering a+ equivalent to aa*.

As D.W. suggested in the comments below, I tried to start from the DFA. I didn't get very far, as this seems to have more to do with the way the non-determinism is handled than with the structure of the automaton.

Could anyone point me in the right direction?

(It has been pointed out to me that the right term to use would be "possessive" (http://www.regular-expressions.info/possessive.html) rather than "greedy". Thanks.)

by Remo.D at March 26, 2015 08:55 PM

StackOverflow

Scala: Iterate through list of ranges

I have a list of 3 elements. I want to create a range from each of them and iterate through all possible combinations.

What do I need to rewrite to be able to operate with a different number of elements in the initial list?

val el = List(5, 4, 7)
(0 to el(0)).map { e0 =>
  (0 to el(1)).map { e1 =>
    (0 to el(2)).map { e2 =>
      doSmth(List(e1, e2, e0))
    }}}

It should be a simple task. Just curious how to google it...
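For what it's worth, a sketch of one generalization (doSmth as in the question): fold the list of limits into the cartesian product of the ranges, then iterate over every combination.

def combinations(limits: List[Int]): Seq[List[Int]] =
  limits.foldRight(Seq(List.empty[Int])) { (limit, acc) =>
    for (i <- 0 to limit; rest <- acc) yield i :: rest
  }

combinations(el).foreach(doSmth)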

by mst at March 26, 2015 08:50 PM

What is the simplest way to set up an SSH/SFTP repository for SBT?

I want to set up an internal repository for libraries to be used in an SBT project (which was initially based on ANT builds). Currently all external libraries are in a central lib/ directory. Do I also need to add metadata files to describe these libraries' dependencies? This seems like overkill; is there a way to configure the ssh (or sftp) resolver to require only the raw JAR files? Currently it seems not: when the resolver encounters a dependency it does not find in Maven Central, it does connect to the repository account, but then it just hangs. No error message, nothing. What is missing here? And how can I find out the root cause of this bug? Logs would at least help.

by Wolfgang Liebich at March 26, 2015 08:46 PM

How to determine whether a type parameter is a subtype of a trait?

Let's say I have the following types

class Foo
trait Bar

Is there a way to make a method which takes in a Type parameter, T, and determine if that T is a Bar? For example,

def isBar[T <: Foo: Manifest] = 
  classOf[Bar].isAssignableFrom(manifest[T].erasure)

Sadly, isBar[Foo with Bar] is false because erasure seems to erase mixins.

Also, manifest[Foo with Bar] <:< manifest[Bar] is false

Is this possible at all?

I looked at this question: How to tell if a Scala reified type extends a certain parent class?

but that answer doesn't work with mixed-in traits as they seem to be erased as evidenced above.
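For what it's worth, one approach that does see mixins (a sketch, assuming Scala 2.10+ runtime reflection is acceptable): a TypeTag keeps the full static type, including refinements, so the subtype check works where Manifest's erasure fails.

import scala.reflect.runtime.universe._

def isBar[T <: Foo : TypeTag]: Boolean = typeOf[T] <:< typeOf[Bar]

isBar[Foo]           // false
isBar[Foo with Bar]  // true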

by TwistedNoodle at March 26, 2015 08:41 PM

Lobsters

QuantOverflow

What are the different Credit Portfolio Management models and what are their advantages?

CreditMetrics, RiskMetrics(Algorithims), etc. are all different risk methodologies used by many banks. However, what are their advantages/disadvantages?

I would appreciate your replies!

by Kare at March 26, 2015 08:37 PM

StackOverflow

Scala covariance / contravariance question

Following on from this question, can someone explain the following in Scala:

class Slot[+T] (var some: T) { 
   //  DOES NOT COMPILE 
   //  "COVARIANT parameter in CONTRAVARIANT position"

}

I understand the distinction between +T and T in the type declaration (it compiles if I use T). But then how does one actually write a class which is covariant in its type parameter without resorting to creating the thing unparametrized? How can I ensure that the following can only be created with an instance of T?

class Slot[+T] (var some: Object){    
  def get() = { some.asInstanceOf[T] }
}

EDIT - now got this down to the following:

abstract class _Slot[+T, V <: T] (var some: V) {
    def getT() = { some }
}

this is all good, but I now have two type parameters, where I only want one. I'll re-ask the question thus:

How can I write an immutable Slot class which is covariant in its type?

EDIT 2: Duh! I used var and not val. The following is what I wanted:

class Slot[+T] (val some: T) { 
}

by oxbow_lakes at March 26, 2015 08:36 PM

Scala: Boolean to Option

I have a Boolean and would like to avoid this pattern:

if (myBool) 
  Option(someResult) 
else 
  None

What I'd like to do is

myBool.toOption(someResult)

Any suggestions with a code example would be much appreciated.
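A sketch of one way to get exactly that syntax, via an implicit value class (scalaz offers a similar myBool.option(someResult) out of the box):

implicit class BoolToOption(private val b: Boolean) extends AnyVal {
  // by-name parameter: someResult is only evaluated when b is true
  def toOption[A](a: => A): Option[A] = if (b) Some(a) else None
}

myBool.toOption(someResult)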

by Chris Beach at March 26, 2015 08:20 PM

Slick 3.0 many-to-many query with the join as an iterable

I've created a many-to-many collection using Slick 3.0, but I'm struggling to retrieve data in the way I want.

There is a many-to-many relationship between Events and Interests. Here are my tables:

case class EventDao(title: String,
                    id: Option[Int] = None)


class EventsTable(tag: Tag)
  extends Table[EventDao](tag, "events") {

  def id = column[Int]("event_id", O.PrimaryKey, O.AutoInc)
  def title = column[String]("title")

  def * = (
    title,
    id.?) <> (EventDao.tupled, EventDao.unapply)

  def interests = EventInterestQueries.query.filter(_.eventId === id)
    .flatMap(_.interestFk)
}


object EventQueries {

  lazy val query = TableQuery[EventsTable]

  val findById = Compiled { k: Rep[Int] =>
    query.filter(_.id === k)
  }
}

Here's EventsInterests:

case class EventInterestDao(event: Int, interest: Int)


class EventsInterestsTable(tag: Tag)
  extends Table[EventInterestDao](tag, "events_interests") {

  def eventId = column[Int]("event_id")
  def interestId = column[Int]("interest_id")

  def * = (
    eventId,
    interestId) <> (EventInterestDao.tupled, EventInterestDao.unapply)

  def eventFk = foreignKey("event_fk", eventId, EventQueries.query)(e => e.id)
  def interestFk = foreignKey("interest_fk", interestId, InterestQueries.query)(i => i.id)
}


object EventInterestQueries {
  lazy val query = TableQuery[EventsInterestsTable]
}

And finally Interests:

case class InterestDao(name: String,
                       id: Option[Int] = None)

class InterestsTable(tag: Tag)
  extends Table[InterestDao](tag, "interests") {

  def id = column[Int]("interest_id", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name")
  def name_idx = index("idx_name", name, unique = true)

  def * = (
    name,
    id.?) <> (InterestDao.tupled, InterestDao.unapply)

  def events = EventInterestQueries.query.filter(_.interestId === id)
    .flatMap(_.eventFk)
}


object InterestQueries {

  lazy val query = TableQuery[InterestsTable]

  val findById = Compiled { k: Rep[Int] =>
    query.filter(_.id === k)
  }
}

I can query and retrieve tuples of (event.name, interest) with the following:

val eventInterestQuery = for {
  event <- EventQueries.query
  interest <- event.interests
} yield (event.title, interest.name)

Await.result(db.run(eventInterestQuery.result).map(println), Duration.Inf)

So this is what I currently have.

What I want is to be able to populate a case class like:

case class EventDao(title: String,
                    interests: Seq[InterestDao],
                    id: Option[Int] = None)

The trouble is that if I update my case class like this, it messes up my def * projection in EventsTable. Also, I'll have to rename the EventsTable.interests filter to something like EventsTable.interestIds which is a bit ugly but I could live with if necessary.

Also, I can't find a way of writing a for query that yields (event.name, Seq(interest.name)). Anyway, that's just a stepping stone to me being able to yield a (EventDao, Seq(InterestDao)) tuple which is what I really want to return.

Does anyone know how I can achieve these things? I also want to be able to 'take' a certain number of Interests, so for some queries all would be returned, but for others only the first 3 would be.

by jbrown at March 26, 2015 08:19 PM

AWS

Preview the Latest Updates in the Master Branch – AWS SDK for Go

Following up on his recent guest post, my colleague Peter Moon has more news for Go developers!

Jeff;


 

Since our initial kickoff announcement in January, we have been revamping the internals of the AWS SDK for Go in the project’s ‘develop’ branch on GitHub, laying out a solid foundation for a well-tested, robustly generated SDK that meets the same high quality bar as our other official SDKs.

Today, with complete support for all AWS protocols and services, the develop branch has been merged to the master branch of the project. At this point the SDK’s architecture and interfaces include the initial set of key changes we have envisioned, and we’re excited to announce our progress and humbly invite customers to try out the SDK again.

While collecting and responding to your valuable feedback, we will also continue to work on additional improvements including various usability features and better documentation. We are immensely grateful for the amount of engagement and support we’ve been getting from the community and look forward to continue making AWS a better place for Go developers!

Peter Moon, Senior Product Manager

by Jeff Barr at March 26, 2015 08:07 PM

/r/compsci

QuantOverflow

ex ante tracking error correlation between funds

I have two portfolios called Comb & Global. They both have the same investable universe, let's say 3000 stocks, and are measured against the same benchmark, so it is possible that both funds hold the same stocks. I would like to examine the correlation of the ex-ante tracking errors between the two funds.

I know I can calculate the ex-ante tracking error as below,

te = sqrt((port_wgt - bm_wgt)' * cov_matrix * (port_wgt - bm_wgt))

I also know the correlation is calculated by

 p = cov(x,y) / stdev(x) * stdev(y)

I was wondering about the best way to calculate the ex-ante correlation between the two funds. Is there a relationship between the two funds' weights that I can make use of?

Update

I should have mentioned that the two portfolios are sub-portfolios that are combined into one portfolio. So I wanted to see the correlation of the ex-ante tracking errors between the two sub-portfolios.

I realised I can do the following,

port_wgts - number_of_companies x 2 matrix
cov_matrix - number_of_companies x number_of_companies matrix

so the below line will return a 2x2 covariance matrix.

port_wgts' * cov_matrix * port_wgts

So I have the variances of both sub-portfolios; taking the square roots gives me the tracking error of each.

Convert the 2 X 2 covariance matrix to a correlation matrix by the following

  D = Diag(cov_matrix)^(1/2)
  corr_matrix = D^-1 * cov_matrix * D^-1

So I now have the correlation between the two sub-portfolios, computed just from the weights.

by mHelpMe at March 26, 2015 08:03 PM

Fefe

While Germany peddles snake oil like "Email made in Germany" ...

While Germany peddles snake oil like "Email made in Germany" and "De-Mail", in Austria the problem apparently hasn't even registered yet. Crypto? What would you need that for?

Recently someone forwarded me a similar case, except there the ISP seriously argued that you can't do anything against the NSA anyway, so crypto isn't worth it.

March 26, 2015 08:01 PM

I need to briefly pass along an email here. Dear ...

I need to briefly pass along an email here.
Dear Sir or Madam,

we regret to inform you that the title Heck, Fresenius, Busch: Repetitorium Anästhesiologie, 7th edition, ISBN 978-3-642-36942-1, contains several gravely incorrect dosage specifications due to production problems. For this reason we felt compelled to recall the print edition from the market and to remove the corresponding e-book from SpringerLink, Springer for R&D and Springer for Hospitals and Health.

We deeply regret this situation and would like to ask you to inform your users in an appropriate manner that, owing to the incorrect information mentioned, the e-book in question should no longer be used and stored copies should be deleted as a precaution.

A revised 8th edition of the title will be published shortly.

For any questions, we can be reached at the email address libraryrelations@springer.com.

Thank you for your understanding.

Lately I've been thinking more often about how, at the start of my career, I firmly resolved never to program things where human lives are ultimately at stake. No power plants, no reactors, nothing involving airplanes, no medical devices, and certainly no military stuff. And then I met people who program such things. And it suddenly became clear to me: if I don't do it, it will be done by people who are less aware of their limits. I have still largely stuck to my resolution.

The exceptions were once a sensor project that was about soil acidity, and once a visualization of vectors that had been extracted from an MRI image of a skull and in the end rendered the head.

I keep thinking back to this because I wonder where exactly I have to draw that line. The soil-acidity thing was an agricultural application, so presumably harmless. But I don't know for sure. The skull thing was pure visualization, or so I talked myself into believing at the time; but today I realize that doctors make their decisions based on that visualization. Today I judge it a mistake to have taken part in that. In the end it was only for a demonstrator and never saw medical use, as far as I know anyway.

But when I read a story like this one, I do wonder what "production problems" are. And whether the elements of a print pipeline also have to be classified as potentially life-threatening.

March 26, 2015 08:01 PM

Funny interview about glyphosate. Pro-Monsanto guy: glyphosate ...

Funny interview about glyphosate.

Pro-Monsanto guy: Glyphosate is harmless, you could drink it.
Interviewer: We happen to have a glass right here, care to try?
Pro-Monsanto guy: No, I'm not an idiot! This interview is over!

The pro-Monsanto guy, by the way, is this one.

March 26, 2015 08:01 PM

Some readers have noticed what I ...

Some readers have noticed what I had, for media-literacy-training reasons, only hinted at. The cockpit door has its panic mode because of 9/11. It is an anti-terror measure.

So if the story is confirmed the way it currently looks, then anti-terror measures will have caused more deaths in Germany than terrorism has.

Just something to think about.

Update: Hmm, badly phrased. Not more deaths in Germany; after all, they crashed in France. More German deaths, rather. But I really do have viciously ankle-biting smart-aleck readers! :)

March 26, 2015 08:01 PM

StackOverflow

Lost in libpcap - how to use setnonblock() / should I use pcap_dispatch() or pcap_next_ex() for realtime?

I'm building a network sniffer that will run on a pfSense box to monitor an IPsec VPN. I'm compiling under FreeBSD 8.4.

I've chosen to use libpcap in C for the packet capture engine and Redis as the storage system. There will be hundreds of packets per second to handle, and the system will run all day long.

The goal is to have a webpage showing graphs of network activity, updated every minute, or every couple of seconds if possible. When my sniffer captures a packet, it determines its size, who sent it (a geographical site, in our VPN context), to whom, and when. That information then needs to be stored in the database.

I've done a lot of research, but I'm a little lost with libpcap, specifically with the way I should capture packets.

1) Which function should I use to retrieve packets? pcap_loop? pcap_dispatch? Or pcap_next_ex? From what I have read online, loop and dispatch block execution, so pcap_next_ex seems to be the solution.

2) When are we supposed to use pcap_setnonblock? With which capture function, pcap_loop? So if I use pcap_loop, execution won't be blocked?

3) Is multi-threading the way to achieve this? One thread running all the time, capturing packets, analyzing them and storing some data in an array, and a second thread firing every minute to empty this array?

The more I think about it, the more I get lost, so please excuse me if I'm unclear, and don't hesitate to ask for clarification.

Any help is welcome.

Edit:

I'm currently trying to implement a worker pool, with the callback function only putting a new job in the job queue. Any help still welcome. I will post more details later.

by ch3wb at March 26, 2015 08:00 PM

TheoryOverflow

Are there subexponential algorithms for PLANAR SAT known?

Some NP-hard problems that are exponential on general graphs are subexponential on planar graphs, because planar graphs have treewidth at most $4.9 \sqrt{|V(G)|}$ and the problems can be solved in time exponential in the treewidth.

Basically I am interested in whether there are subexponential algorithms for PLANAR SAT, which is NP-complete.

Let $\phi$ be a CNF formula with variables $x_i$, whose $j$-th clause is $c_j$.

The incidence graph $G$ of $\phi$ has vertices $V(G)=\{x_i\} \cup \{c_j\}$ and an edge $(x_i,c_j)$ iff $x_i \in c_j$ or $\lnot x_i \in c_j$.

$\phi$ is in PLANAR SAT if its incidence graph is planar.

Are there subexponential algorithms for PLANAR SAT in terms of $\phi$?

I don't exclude the possibility of reducing SAT to PLANAR SAT to make this possible: SAT could remain exponential while PLANAR SAT is subexponential in $|\phi|$, because the reduction increases the size of the formula.
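For what it's worth, combining the two facts stated above gives the bound one would hope for, assuming a $2^{O(\mathrm{tw})}\cdot\mathrm{poly}(|\phi|)$-time dynamic program for SAT parameterized by the treewidth of the incidence graph (I am treating the existence of such an algorithm as an assumption here):

$$2^{O(\mathrm{tw}(G))} \cdot \mathrm{poly}(|\phi|) \le 2^{O(\sqrt{|V(G)|})} \cdot \mathrm{poly}(|\phi|),$$

which is subexponential in $|V(G)|$, i.e., in the number of variables plus clauses.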

by joro at March 26, 2015 07:51 PM

Dave Winer

The best frameworks are apps

The best software frameworks are apps that do things users care about.

Back in the 80s it was dBASE and then FoxBase. 1-2-3 had a macro language; it was weak, but it was widely used because 1-2-3 was so popular with users.

Today it's WordPress.

And Slack is doing interesting things with their APIs.

Twitter too, but that got kind of muddied-up.

The best one of all of course is JavaScript, a very bizarre language in a totally underpowered environment that reaches into every nook and cranny of the modern world. It's an awful environment, you'd never design one that worked that way, but the draw of all those users makes up for its sins.

Flickr had a wonderful API, still does, but Stewart left the house before it could really blossom as a community thing. See Slack, above.

Chatting with Brent Schlender the other day, I commented that Steve Jobs' politics and mine are exactly opposite. Jobs was an elitist; all his products were, as Doc said in 1997, works of art, to be appreciated for their aesthetics. I am a populist and a plumber. Interesting that this dimension of software is largely unexplored. I hope our species survives long enough to study it.

BTW, when ESR saw XML-RPC he said it was just like Unix. Nicest thing anyone could ever say. When I learned Unix in the mid-late 70s, and studied the source code, I aspired to someday write code like that. So well factored it reads like its own documentation.

Today, I'm mainly concerned with getting some outside-the-silos flow going with people I like to read. If we get (back) there, I will consider it a victory.

March 26, 2015 07:47 PM

StackOverflow

Getting a number to show up a certain percent of the time.

I'm looking to write code that makes one of three numbers show up 50% of the time, 35% of the time, and 15% of the time. I'm quite new to BGScript and I haven't had much luck making this reliable, or work at all. Even if you haven't done any BGScript but have done this in another language, that would be really great!
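Since I don't know BGScript, here is the usual pattern as a sketch in Scala: draw a uniform integer in [0, 100) and walk a cumulative-probability table (the returned values 1, 2 and 3 are placeholders for whatever numbers you need):

    import scala.util.Random

    // returns 1 with probability 50%, 2 with 35%, 3 with 15%
    def weightedPick(): Int = {
      val roll = Random.nextInt(100)  // uniform in 0..99
      if (roll < 50) 1                // rolls 0..49  -> 50%
      else if (roll < 85) 2           // rolls 50..84 -> 35%
      else 3                          // rolls 85..99 -> 15%
    }

The same idea ports to any language with a uniform random-integer primitive.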

by Dan Rondeau at March 26, 2015 07:46 PM

QuantOverflow

Error: could not find function "covEWMA" [on hold]

While running my code I got the error: Error: could not find function "covEWMA"

What is the problem?

by Answer22 at March 26, 2015 07:44 PM

StackOverflow

In Java and using Bridj, how can you dynamically invoke any native function?

Often users of Clojure wish to be as lazy as possible, and delay the creation of classes and objects. In that same spirit, if I wish to invoke a native function from within Java during runtime, I can use com.sun.jna.Function.getFunction("foolibrary", "foofuncname"), which returns a com.sun.jna.Function, which can be invoked.

In Clojure this looks like:

(let [f (com.sun.jna.Function/getFunction "c" "printf")]
  (.invoke f Integer (to-array ["Hello World"])))

Bridj, on the other hand, offers an attractive performance benefit and a claimed simpler API; however, it is still not clear to me how to use Bridj to do something similar to the JNA example. Can someone demonstrate how? Also, if this is possible, are there any performance penalties with this approach? Otherwise, it appears that generating the Java source file ahead of time is the only solution. I would appreciate it if someone could confirm this.

by bmillare at March 26, 2015 07:27 PM

Is it possible to refactor this scala code

I've got the following function:

def firstFunctionMethod(contextTypeName: String): Parser[FunctionCallExpr] = namespace into {
    namespaceName =>
      function into {
        functionName =>
          functionExprParameters(contextTypeName) ~ opt(secondFunctionMethod(getFunctionReturnType(functionName).get)) ^^ {
            case args ~ subPath => FunctionCallExpr(namespaceName + functionName, args, subPath)
          }
      }
  }

The problem is that the target class has 10 functions with exactly the same code; the only difference is that firstFunctionMethod and secondFunctionMethod vary from one function to the next.

Is it possible to refactor it?

by ServerSideCat at March 26, 2015 07:27 PM

QuantOverflow

binomial option pricing model - problem with risk-neutral probability

I have a little problem: in the binomial option pricing model, the price of a European derivative security $V_{n}$ satisfies

$$V_{n}=\frac{1}{1+r}\left[\tilde{p}\cdot optionUp +\tilde{q}\cdot optionDown\right]$$

where $\tilde{p}=\frac{e^{r\Delta T} -d}{u-d}$. But when I read the article on the binomial options pricing model on Wikipedia (http://en.wikipedia.org/wiki/Binomial_options_pricing_model), the $\tilde{p}$ in Wikipedia's $\textbf{algorithm}$ is slightly different:

$$\tilde{p}=\frac{(u e^{-r\Delta T} -1)\,u}{u^2-1}$$

(I take $q=0$.) I try to compare these two forms but they are not equal... why? Thanks! :)
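One algebra step that may help the comparison (a sketch, assuming the usual CRR choice $d = 1/u$): multiplying the numerator and denominator of the standard form by $u$ gives

$$\tilde{p}=\frac{e^{r\Delta T}-d}{u-d}=\frac{e^{r\Delta T}-1/u}{u-1/u}=\frac{u\,e^{r\Delta T}-1}{u^{2}-1},$$

which is not the quoted expression, so the two forms are indeed not algebraically equal as written. It is worth re-checking which quantity the Wikipedia algorithm actually computes, e.g. whether a discount factor has already been folded into its probability.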

by glork at March 26, 2015 07:21 PM

StackOverflow

scala sum By Key function

Given an array as follows:

val stats: Array[(Int, (Double, Int))]=Array((4,(2,5)),  (4,(8,4)),  (7,(10,0)),  (1,(1,3)), (4,(7,9)))

How can I get the sum of the first elements of the value pairs when grouping by key? For example, for the key 4 I have to sum the values 2.0 + 8.0 + 7.0.

result = Array((4, 17.0), (7, 10.0), (1, 1.0))

I started by doing this, but I don't know how to continue:

stats.groupBy(_._1) mapValues (_.map(_._2)) //....
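A possible completion of exactly this approach (a sketch; note that groupBy makes no ordering guarantee for the resulting array):

    val result: Array[(Int, Double)] =
      stats.groupBy(_._1)                  // Map[Int, Array[(Int, (Double, Int))]]
           .mapValues(_.map(_._2._1).sum)  // take the first element of each value pair and sum
           .toArray
    // e.g. Array((4, 17.0), (7, 10.0), (1, 1.0)), up to ordering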

Thanks for the help!

by Mohammed Gh at March 26, 2015 07:21 PM

org mode with clojure - can't get export to work

EDIT I've also asked this question on emacs.stackexchange

I'm a relative emacs newbie and have set up my emacs (24.4.1) to work with clojure as described here.

The gist of it is that I am now using the latest org-mode from git and loading it in my init.el (I am using prelude btw) as below:

   (add-to-list 'load-path "~/repos/org-mode/lisp")
   (require 'org)
   (require 'ob-clojure)

I am trying to use org to write a literate clojure program that I can export to markdown. Clojure and babel now work well, evaluation works etc, but when I try to export my org file I get an error.

    load-with-code-conversion: Symbol's value as variable is void: p

The stack trace when I set toggle-debug-on-error is:

    Debugger entered--Lisp error: (void-variable p)
        eval-buffer(#<buffer  *load*> nil
             "/Users/krisfoster/repos/org-mode/lisp/ox.el" nil t)
             ; Reading at buffer position 229233
        load-with-code-conversion("/Users/krisfoster/repos/org-mode/lisp/ox.el"
             "/Users/krisfoster/repos/org-mode/lisp/ox.el" nil t)
        autoload-do-load((autoload "ox" "Export dispatcher for Org mode.\n
            \nIt provides an access to common export related tasks in a         
            buffer.\nIts interface comes in two flavors: standard and 
            expert.\n\nWhile both share the same set of bindings, only the 
            former\ndisplays the valid keys associations in a dedicated 
            buffer.\nScrolling (resp. line-wise motion) in this buffer is done 
            with\nSPC and DEL (resp. C-n and C-p) keys.\n\nSet variable `org-
            export-dispatch-use-expert-ui' to switch to one\nflavor or the 
            other.\n\nWhen ARG is \\[universal-argument], repeat the last 
            export action, with the same set\nof options used back then, on 
            the current buffer.\n\nWhen ARG is \\[universal-argument] \\
            [universal-argument], display the asynchronous export 
            stack.\n\n(fn &optional ARG)" t nil) org-export-dispatch)
        command-execute(org-export-dispatch)

I tried to resolve this by require-ing the various org export packages (the ones in the clone of the org git repo, that is) from within my init.el. But no dice - in fact that generated yet more issues. I have tried debugging but can't figure out what is wrong. I suspect I need to require something, but I don't know what.

I have my init.el here - init.el gist

Anyone have any ideas what I am doing wrong?

Thanks in advance.

by Kris at March 26, 2015 07:11 PM

Idiomatic clojure map lookup by keyword

Say I have a clojure map that uses keywords as its keys:

(def my-car {:color "candy-apple red" :horsepower 450})

I know that I can look up the value associated with the keyword by either using the keyword or the map as a function and the other as its argument:

(my-car :color)
; => "candy-apple red"
(:color my-car)
; => "candy-apple red"

I realize that both forms can come in handy for certain situations, but is one of them considered more idiomatic for straightforward usage like shown above?

by gregspurrier at March 26, 2015 07:05 PM

Referring nested class in asInstanceOf

I am getting not found: value Duck

    class Type
    class Value(val t: Type)
    class Duck extends Type {
        class Val extends Value(this)
    }
    def f(individual: Value) = individual.t match {
        // case t: Duck => individual.asInstanceOf[Value] //this is ok
         case t: Duck => individual.asInstanceOf[Duck.Val] //but I need this
    }
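A hedged guess at the missing piece: Val is a nested class, so there is no type Duck.Val; what exists is the path-dependent type t.Val for a concrete Duck instance t, or the type projection Duck#Val. A sketch that compiles under that reading:

    class Type
    class Value(val t: Type)
    class Duck extends Type {
      class Val extends Value(this)
    }

    def f(individual: Value) = individual.t match {
      // t is a stable identifier, so the path-dependent type t.Val is available
      case t: Duck => individual.asInstanceOf[t.Val]
    }

If any Duck's Val will do, the type projection Duck#Val also works in the cast.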

Adding some details here to improve the question quality. Formal quality checks cannot be wrong. If more letters improve your question, it must be the case. Now my question is much better and can be posted.

by Recognize Evil as Waste at March 26, 2015 07:03 PM

CompsciOverflow

Comparing two graphs, finding vertices that changed their positions [on hold]

I have the task of comparing two organisation charts. These chart objects are described as a set of nodes (people), where each node has a unique ID field and a parent ID field (pointing to another node's unique ID). For simplicity we can assume that there is only one node without a parent ID (the head of the organisation) and that no node can reference itself. So it is essentially a tree of all the people in the organisation. The comparison should produce three lists: people who left the organisation, people who joined, and people who changed their position in the organisation. Leavers and joiners are trivial, but I don't know how to proceed with people who changed their position. I need some ideas on how to distinguish people who really moved within the chart from those who merely have new parent IDs.

by devmiles.com at March 26, 2015 06:54 PM

TheoryOverflow

Learning read-once branching programs with membership queries

Let $B=\{0,1\}$. A read-once branching program of length $n$ and width $w$ is given by a graph with layers $0,\ldots, n$, where the first layer has just the starting node, the last layer has nodes labeled 0 and 1, each layer has $\le w$ nodes, and each node has 2 outgoing edges labeled 0 and 1 pointing to nodes in the next layer. To evaluate the program on $x_1\ldots x_n$, simply start at the start node, follow the edges labeled $x_1,x_2,\ldots$, and read the label on the final node. Fix $n$, and let $w$-ROBP be the class of width-$w$ read-once branching programs (and, by abuse of notation, the functions computed by them).

Question:

Find a $\mathrm{poly}(n,1/\epsilon)$-time algorithm (or else show that one would break some hardness assumption) such that, given $f:B^n\to B$ as a black box, it computes $g\in w$-ROBP such that

$$ \mathbb{P}_{x\sim B^n}(g(x)\ne f(x))\le \min_{h\in w\text{-ROBP}} \mathbb{P}_{x\sim B^n}(h(x)\ne f(x)) + \epsilon. $$

In other words, find an agnostic PAC-learning algorithm for $w$-ROBP. Even a weaker bound depending on $\min_{h\in w\text{-ROBP}} \mathbb{P}_x(h(x)\ne f(x))$ would be interesting. I'm convinced that I have an algorithm for the case $f\in w$-ROBP (so that the min is 0), but it doesn't generalize to this setting.

A quick literature search shows intractability results for more powerful classes, such as width-3 (read-many) branching programs, and learning algorithms for finite automata (which are ROBPs where each layer is the same), but not much on ROBPs. I would also appreciate any other results/references on learning ROBPs.

by Holden Lee at March 26, 2015 06:39 PM

Lobsters

How did you find your second job?

I’m beginning to feel like I no longer enjoy working where I do. My coworkers are stagnant in ability and only really program at work. Most of them are not in my age range, and I live in a pocket of the country far from any big programming communities. Thus, I’d like to start looking for another job, although this is hard. It’s been ~8 months since I got out of school and I no longer have strong contacts with companies. On top of that, I feel like I’d be disappointing people by leaving, since I was recently put in charge of a big project. I wish I could have told my boss I was looking for a new job and not been placed in charge of it, but I feel they would fire me.

The problem I have is responding to emails during the day: I feel someone walking by could read them. Also, phone interviews run 30-60 minutes and can add up quickly. And if there is a video-chat interview, I’d have to go somewhere else or work from home that day, which could look “suspicious.”

TL;DR: No longer like current job, don’t know how to go about getting another one.

My question: how did you manage to do it all? Should I feel guilty about potentially leaving this project? Any other advice is welcome.

by howdoipython at March 26, 2015 06:30 PM

TheoryOverflow

Prove if A is valid then B is valid [on hold]

I have to prove the following:

Let $F$ be a set of clauses and let $F' = F \cup \{res(C_1,C_2,A_i)\}$ be the extension of $F$ by a resolvent of some clauses $C_1,C_2 \in F$, where $A_i$ is a literal occurring positively in $C_1$ and negatively in $C_2$.

Prove that: If $F$ is valid, then $F'$ is valid.

So, in other words, I have to prove that when I take the union of the original formula $F$ and the clause obtained by applying resolution on $F$ over a literal $A_i$, validity is preserved.

I think this should be provable with a direct proof.

Suppose $F$ is valid, i.e., every interpretation $I$ satisfies $F$ ($I(F) = 1$). How do I proceed from here? Do I have to show somehow that the semantics of the formula does not change when I add the resolvent to $F$? Can anybody give me a hint on how to proceed?
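A hint at the missing step (a sketch of the standard argument): take any interpretation $I$ with $I(F)=1$; in particular $I(C_1)=I(C_2)=1$. Split on the value of $A_i$:

$$I(A_i)=1 \;\Rightarrow\; I(C_2\setminus\{\lnot A_i\})=1, \qquad I(A_i)=0 \;\Rightarrow\; I(C_1\setminus\{A_i\})=1,$$

because $C_2$ (resp. $C_1$) must then be satisfied by some literal other than $\lnot A_i$ (resp. $A_i$). In either case $I$ satisfies $res(C_1,C_2,A_i)=(C_1\setminus\{A_i\})\cup(C_2\setminus\{\lnot A_i\})$, hence $I(F')=1$.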

by user1291235 at March 26, 2015 06:21 PM

StackOverflow

Object 'extends' Class Passing Its Own Field/Method?

Given the following class:

scala> class Foo(x: Int) {}
defined class Foo

Is it possible for me to extend Foo from an object using its field/method?

scala> object Bar extends Foo(f) { 
     |   lazy val f = 100
     | }
<console>:8: error: not found: value f
       object Bar extends Foo(f) { 
                              ^
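For what it's worth, a sketch of a common workaround: the argument to Foo's constructor is evaluated before Bar exists, so it cannot refer to Bar's own members, but it can live in a separate object (BarDefaults is a hypothetical name):

    class Foo(x: Int)

    object BarDefaults {
      val f = 100  // the value moved out of the object under construction
    }

    object Bar extends Foo(BarDefaults.f)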

by Kevin Meredith at March 26, 2015 06:05 PM

Fefe

Now we finally know why TTIP is great! Brace ...

Now we finally know why TTIP is great! Brace yourselves:
Malmström believes in the benefits of the agreement with the USA, and not just ex officio. An agreement on lower tariffs and on harmonizing rules for airbags, turn signals and air conditioning is supposed to create new jobs, but also to send a geopolitical signal: the USA and Europe have not completely drifted apart, even in the age of the NSA. "Putin would not like TTIP," the EU commissioner said recently.
They are all so trapped in their transatlantic rhetoric that they can no longer tell reality from fiction themselves. Putin is the bad guy, so annoying Putin is good!1!!

March 26, 2015 06:01 PM

Bug of the day: SELinux. Money quote: So. Yes, thats ...

Bug of the day: SELinux. Money quote:
So. Yes, thats correct: The SELinux system that is only there to protect you, passes attacker controlled data to sh -c (https://docs.python.org/2/library/commands.html) inside a daemon running as root. Let that sink in...

March 26, 2015 06:01 PM

Brief announcement from Netanyahu: The state of Israel does ...

Brief announcement from Netanyahu:
The state of Israel does not conduct espionage against the United States or Israel’s other allies.
Only problem:
Israel’s claim is not only incredible on its face. It is also squarely contradicted by top-secret NSA documents, which state that Israel targets the U.S. government for invasive electronic surveillance, and does so more aggressively and threateningly than almost any other country in the world. Indeed, so concerted and aggressive are Israeli efforts against the U.S. that some key U.S. government documents — including the top secret 2013 intelligence budget — list Israel among the U.S.’s most threatening cyber-adversaries and as a “hostile” foreign intelligence service.
The occasion for this statement, by the way, was that Obama had a few papers leaked showing that Israel had eavesdropped on the US peace negotiations with Iran and passed the results on to the Republicans in the US House of Representatives.

March 26, 2015 06:01 PM

Old and busted: headaches from electrosmog. New hotness: ...

Old and busted: headaches from electrosmog.

New hotness: pregnant from the smart meter.

I knew smart meters were dangerous, but apparently I completely misjudged the risk vector!

March 26, 2015 06:01 PM

StackOverflow

Playframework - JSON parsing object with single field - definition issue

I cannot find a way to make it work when the deserialized object has a single field; the code does not compile. It seems that the and operator does some transformation, and I cannot find a method to call that does the same for a single field.

I have the following JSON:

{"total": 53, "max_score": 3.2948244, "hits": [
                                 {
                                     "_index": "h",
                                     "_type": "B",
                                     "_id": "3413569628",
                                     "_score": 3.2948244,
                                     "_source": {
                                         "fotky": [
                                             {
                                                 "popisek":" ",
                                                 "localFileSystemLocation":" ",
                                                 "isMain": true,
                                                 "originalLocation": ""
                                             }
                                         ]
                                     }
                                 }
                             ]
}

I am trying to deserialize into the following data model:

case class SearchLikeThisResult(total: Int, max_score: Double, hits: Seq[Hits])

case class Hits(_index: String, _type: String, _id: String, _score: Double, _source: Source)

case class Source(fotky: Seq[Photo])

case class Photo(isMain: Boolean, originalLocation: Option[String], localFileSystemLocation: Option[String], popisek: Option[String])

The implicit Reads are defined as follows:

object SearchLikeThisHits {

  import play.api.libs.functional.syntax._

  implicit val photoReads: Reads[Photo] = (
    (JsPath \ "isMain").read[Boolean] and
      (JsPath \ "originalLocation").readNullable[String] and
      (JsPath \ "localFileSystemLocation").readNullable[String] and
      (JsPath \ "popisek").readNullable[String]
    )(Photo.apply _)    

  implicit val sourceReads: Reads[Source] = (
    (JsPath \ "fotky").read[Seq[Photo]]
    )(Source.apply _)

  implicit val hitsReads: Reads[Hits] = (
    (JsPath \ "_index").read[String] and
      (JsPath \ "_type").read[String] and
      (JsPath \ "_id").read[String] and
      (JsPath \ "_score").read[Double] and
      (JsPath \ "_source").read[Source]
    )(Hits.apply _)

  implicit val searchLikeThisResult: Reads[SearchLikeThisResult] = (
    (JsPath \ "total").read[Int] and
      (JsPath \ "max_score").read[Double] and
      (JsPath \ "hits").read[Seq[Hits]]
    )(SearchLikeThisResult.apply _)
}

What I am really struggling with is the part under _source:

implicit val sourceReads: Reads[Source] = ( (JsPath \ "fotky").read[Seq[Photo]] )(Source.apply _)

where read is reported as an unknown symbol; in the other cases "and" performs some transformation. An inline definition doesn't help either.

Has anybody faced this before?
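For what it's worth, the functional builder syntax needs at least two paths combined with and, so for a single-field case class one known workaround is to map over the lone Reads instead (a sketch based on the case classes above):

    implicit val sourceReads: Reads[Source] =
      (JsPath \ "fotky").read[Seq[Photo]].map(Source.apply)

Alternatively, the Json.reads[Source] macro should generate an equivalent Reads without the functional syntax.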

by jaksky at March 26, 2015 05:49 PM

StackOverflow

Apache Spark flatMap time complexity

I've been trying to find a way to count the number of times sets of Strings occur in a transaction database (implementing the Apriori algorithm in a distributed fashion). The code I have currently is as follows:

import scala.collection.mutable.ArrayBuffer

val cand_br = sc.broadcast(cand)
transactions.flatMap(trans => freq(trans, cand_br.value))
            .reduceByKey(_ + _)

def freq(trans: Set[String], cand: Array[Set[String]]): Array[(Set[String], Int)] = {
  val res = ArrayBuffer[(Set[String], Int)]()  // never reassigned, so val instead of var
  for (c <- cand) {
    if (c.subsetOf(trans)) {
      res += ((c, 1))  // candidate c occurs in this transaction
    }
  }
  res.toArray  // Scala returns the last expression; explicit return is unidiomatic
}

transactions starts out as an RDD[Set[String]], and I'm trying to convert it to an RDD[(K, V)], with K ranging over the elements of cand and V the number of occurrences of each element of cand in the transaction list.

Watching the performance in the UI, the flatMap stage takes about 3 min to finish, whereas the rest takes < 1 ms.

transactions.count() ~= 88000 and cand.length ~= 24000, to give an idea of the data I'm dealing with. I've tried different ways of persisting the data, but I'm pretty positive that it's an algorithmic problem I'm faced with.
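A rough back-of-the-envelope using those figures: the flatMap performs about

$$88{,}000 \times 24{,}000 \approx 2.1\times 10^{9}$$

subset tests, each costing roughly $|c|$ hash lookups, so on the order of $10^{10}$ elementary operations in total. A stage time measured in minutes is therefore unsurprising, and any real speedup has to avoid testing every (transaction, candidate) pair.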

Is there a better solution to this subproblem?

PS: I'm fairly new to Scala / Spark framework, so there might be some strange constructions in this code

by solistice at March 26, 2015 05:35 PM

CompsciOverflow

Travelling with the most efficient path

A friend of mine actually asked me a very interesting computer science related question, and I have been stuck on it for a long time.

The problem is: you have to travel $1000$ km. The only gas station is at the starting point. Your fuel tank's maximum capacity is just enough for $50$ km of travel, but you are allowed to "bury" fuel along the way and save it for later.

For example, you can first travel $20$ km, bury $10$ km worth of fuel there, and then go back to refuel, so that next time you can retrieve the $10$ km worth of fuel you left and reach further with it.

You need to find the most efficient way to reach the destination.

What I thought of is dynamic programming; however, you then have to assume that the distance travelled between refuelings is an integer number of kilometers, otherwise it is hard to set up the DP. I have not tried linear programming yet, but I think it might be possible.

Do you have any idea how to do it? Or any hints?

Most importantly, what type of CS problem is it? Is it NP-hard? Is it solvable by machine, or is it more of a mathematical problem?

Some more thoughts:

  • Since it is a continuous path, asking whether it is NP-hard might be a bit silly, but I am still very curious.
  • $1000$ and $50$ might be deliberately chosen to avoid complex computation.
  • Is there a greedy solution? I cannot think of one just yet.
  • I now think it is more of a mathematical pattern-finding problem, though my friend claims it is a CS problem, so I decided to keep this post.

And if you have any scientific articles or textbooks related to this, please tell me; I do not know where to start in the first place.
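For reference, this looks like the classic "jeep problem" (also known as the exploration or desert-crossing problem). A hedged sketch of the known pattern for the one-way variant: with $n$ tank loads available at the start and tank range $L$, the farthest reachable point is

$$d_{\max} = L\left(1 + \frac{1}{3} + \frac{1}{5} + \cdots + \frac{1}{2n-1}\right).$$

The partial sums grow like $\frac{1}{2}\ln n$, so the series diverges and $1000$ km is reachable with $L = 50$ km, but the sum must reach $1000/50 = 20$, which needs $n$ on the order of $10^{16}$ tank loads. The optimal caching points follow this odd-harmonic spacing rather than an integer grid, which may be why a kilometer-grid DP feels unnatural.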

by HenryHey at March 26, 2015 05:07 PM

Greedy algorithm for Set Cover problem - need help with approximation

I want to estimate how close the greedy algorithm gets to the optimal solution of the Set Cover Problem, which I'm sure most of you are familiar with, but just in case, you can visit the link above.
The problem is NP-hard, and I'm trying to find a bound on how well the greedy algorithm performs.
I know it looks like a lot, but please bear with me. I pretty much did most of the work; I'm just missing that last small piece.

Here is the pseudo code:

Input: $U$ - set of elements, $F$ - family of sets s.t. $\bigcup_{S\in F}S=U$
Output: $C$ - a family of sets; $C\subseteq F$ s.t. $\bigcup_{S\in C}S=U$

initially C is empty

while U is not empty do:
    choose S from F that maximizes the cover of elements in U
    add S to C
    subtract S's elements from U

return C

The algorithm is pretty straightforward, and it is easy to see that it is indeed polynomial.

This is my attempt at the approximation:
Claim 1: If a set $U$ of $m$ objects can be covered by $k$ sets, then there has to be a set $S\in F$ that covers at least $\frac{1}{k}m$ of them.
Proof: Trivial. (I decided not to prove it.)

A corollary is that, in the situation described in the claim, the greedy algorithm will choose a set that covers at least $\frac{1}{k}m$ elements.

Claim 2: Given a universe $U$ of $n$ elements, if there exists a cover of size $k$, then after $k$ iterations the greedy algorithm will have covered at least half of the elements, i.e., at least $\frac{1}{2}n$ elements.
Proof: By Claim 1, in the first iteration the algorithm covers at least $\frac{1}{k}n$ elements. Upon entering the second iteration, at most $n-n\frac{1}{k}$ elements remain uncovered, and so the greedy algorithm covers at least an additional $\frac{1}{k}(n-n\frac{1}{k})$ elements. In general, in the $i$-th iteration the algorithm covers at least $\frac{1}{k}(n-n\frac{i-1}{k})$ elements. So after $k$ iterations:
$$\sum_{i=1}^{k}\frac{1}{k}(n-n\frac{i-1}{k})=\sum_{i=0}^{k-1}\frac{1}{k}(n-n\frac{i}{k})$$
$$=\sum_{i=0}^{k-1}\frac{n}{k}-\sum_{i=0}^{k-1}\frac{ni}{k^2}=n-\frac{n}{k^2}\sum_{i=0}^{k-1}i$$
$$=n-\frac{n}{k^2}(\frac{k(k-1)}{2})=n-\frac{n}{2}(\frac{k-1}{k})\geq\frac{1}{2}n$$

OK, now this is where I need help: I know that in the first $k$ iterations the algorithm covers at least half of the elements. After another $k$ iterations, another half of what is left is covered (i.e., another $\frac{1}{4}n$). So I know the bound is $k\log n$; I just can't figure out how to formalize it.
What recurrence captures this behaviour? I tried $T(ki)=T(\frac{1}{2^i}n)$ and solving for $i$, but it didn't work.
What formula or equation should I solve to actually show that the number of iterations is bounded by $k\log n$?
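A standard way to formalize the halving argument (a sketch): let $u_t$ be the number of still-uncovered elements after $t$ iterations, with $u_0 = n$. The remaining elements can always be covered by the $k$ sets of an optimal cover, so by Claim 1 each iteration covers at least a $\frac{1}{k}$ fraction of them, giving

$$u_t \le \left(1-\frac{1}{k}\right)^t n \le e^{-t/k}\, n.$$

Setting $t = k\ln n$ makes the right-hand side smaller than $1$, and since $u_t$ is an integer this forces $u_t = 0$. So the greedy algorithm stops after at most $k\ln n$ iterations and returns a cover of size at most $k\ln n = O(k\log n)$.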

by so.very.tired at March 26, 2015 05:04 PM

Fefe

Oh man, this is heavy. The prosecutor's office now believes ...

Oh man, this is heavy. The prosecutor's office now believes the copilot deliberately crashed the plane. The cockpit door was apparently not merely locked but in panic mode, which is actually intended for "terrorists have taken over the passenger cabin".

I hope this now triggers a proper debate about the salaries and working conditions of pilots. Something like this does not come out of the blue. Pilots are paid comparatively well next to ground staff, but that is not because pilots are paid particularly well; it is because ground staff are paid particularly badly. The training costs well into six figures in euros, and the airline advances that money if you are lucky. In return you are, so to speak, their serf until you have paid it off in installments.

In this context, see also pay-to-fly. Pilots have to log flight hours to be allowed to do their job. And they have to pay for that privilege. So the pilot pays to be allowed to do his job, not the other way around.

Update: From an email:

My son passed the Lufthansa pilot exam. The training currently costs 80,000 euros (heard yesterday: a 3.5-year apprenticeship at Siemens costs, according to Siemens, 100,000 as well). You also have to cover your own living costs at the German training locations, such as Bremen. There is no BAföG, because acquiring a license does not count as regular vocational training (my son says: like a forklift license, he was told).

The training costs are easily recouped after three years, say not just LH but also pilots. Just look at the starting salaries and the allowances.

So lifelong slavery is not accurate. Especially since they can also be poached by other airlines.

March 26, 2015 05:01 PM